=== RUN TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p ha-290859 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-290859 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd: exit status 80 (1m13.948979388s)
-- stdout --
* [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20512
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting "ha-290859" primary control-plane node in "ha-290859" cluster
* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
* Found network options:
- NO_PROXY=192.168.39.110
* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
- env NO_PROXY=192.168.39.110
-- /stdout --
** stderr **
I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0414 14:28:44.858763 1213155 out.go:177] - MINIKUBE_LOCATION=20512
I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
I0414 14:28:44.861113 1213155 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 14:28:44.862568 1213155 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
I0414 14:28:44.864291 1213155 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:28:44.865558 1213155 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0414 14:28:44.866690 1213155 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0414 14:28:44.921581 1213155 start.go:340] cluster config:
{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml:
I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG | <name>mk-ha-290859</name>
I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG | <dns enable='no'/>
I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |
I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG | <dhcp>
I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG | </dhcp>
I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG | </ip>
I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |
I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG |
I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml:
I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859) <name>ha-290859</name>
I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859) <memory unit='MiB'>2200</memory>
I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859) <vcpu>2</vcpu>
I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859) <features>
I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859) <acpi/>
I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859) <apic/>
I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859) <pae/>
I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859) </features>
I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859) <cpu mode='host-passthrough'>
I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859) </cpu>
I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859) <os>
I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859) <type>hvm</type>
I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859) <boot dev='cdrom'/>
I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859) <boot dev='hd'/>
I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859) <bootmenu enable='no'/>
I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859) </os>
I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859) <devices>
I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859) <disk type='file' device='cdrom'>
I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859) <target dev='hdc' bus='scsi'/>
I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859) <readonly/>
I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859) </disk>
I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859) <disk type='file' device='disk'>
I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859) <driver name='qemu' type='raw' cache='default' io='threads' />
I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859) <target dev='hda' bus='virtio'/>
I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859) </disk>
I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859) <interface type='network'>
I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859) <source network='mk-ha-290859'/>
I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859) <model type='virtio'/>
I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859) </interface>
I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859) <interface type='network'>
I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859) <source network='default'/>
I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859) <model type='virtio'/>
I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859) </interface>
I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859) <serial type='pty'>
I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859) <target port='0'/>
I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859) </serial>
I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859) <console type='pty'>
I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859) <target type='serial' port='0'/>
I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859) </console>
I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859) <rng model='virtio'>
I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859) <backend model='random'>/dev/random</backend>
I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859) </rng>
I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859) </devices>
I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>:
I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
exit 0
I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-290859' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
else
echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts;
fi
fi
I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
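The run of `sed` commands above (14:29:08.262–08.345) rewrites `/etc/containerd/config.toml` in place: pinning the pause image, switching `SystemdCgroup` off so containerd uses the cgroupfs driver, and normalizing runc runtime names. As an editor's aside, here is a minimal sketch of the same style of edit against a scratch copy of the config (file contents are illustrative, not taken from the VM; GNU `sed -i` assumed):

```shell
# Sketch: apply minikube-style containerd config edits to a scratch file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF

# Use the legacy cgroupfs driver instead of systemd cgroups,
# preserving the line's original indentation via the capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
# Pin the sandbox (pause) image version kubeadm expects.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"

grep -E 'SystemdCgroup|sandbox_image' "$cfg"
rm -f "$cfg"
```

The `\1` back-reference is why the log's patterns capture leading spaces: TOML tables rely on consistent indentation, and the rewrite must not disturb it.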
I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 14:29:09.901676 1213155 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.110
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "ha-290859"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.110"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.10
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/super-admin.conf"
name: kubeconfig
status: {}
I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
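The one-liner above makes the hosts entry idempotent: it strips any existing tab-separated `control-plane.minikube.internal` line, appends the fresh mapping, and copies the result back over `/etc/hosts` with sudo. A sketch of the same pattern against a scratch file (the `pin_host` helper name is ours; minikube inlines this in one bash command):

```shell
# Sketch: idempotently pin a hostname to an IP in a hosts-style file,
# mirroring the log's "{ grep -v ...; echo ...; } > /tmp/h.$$; cp" dance.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.2\tcontrol-plane.minikube.internal\n' > "$hosts"

pin_host() {  # pin_host FILE IP NAME
  local file=$1 ip=$2 name=$3 tmp
  tmp=$(mktemp)
  # Drop any old tab-separated entry for NAME, then append the new one.
  { grep -v $'\t'"$name"'$' "$file"; echo "$ip $name"; } > "$tmp"
  cp "$tmp" "$file" && rm -f "$tmp"
}

pin_host "$hosts" 192.168.39.254 control-plane.minikube.internal
cat "$hosts"
rm -f "$hosts"
```

Writing to a temp file first and copying it back avoids truncating `/etc/hosts` while it is still being read by the pipeline.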
I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
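The three install blocks above follow the same recipe: copy each CA into `/usr/share/ca-certificates`, compute its OpenSSL subject hash, and symlink `/etc/ssl/certs/<hash>.0` at the PEM so OpenSSL's hash-based lookup can find it (hence the names `b5213941.0`, `51391683.0`, `3ec20f2e.0`). A sketch in a scratch directory — the self-signed cert is generated purely for illustration:

```shell
# Sketch: install a cert the way the log does -- a <subject-hash>.0
# symlink pointing at the PEM, for OpenSSL's lookup-by-hash directory scan.
dir=$(mktemp -d)
# Throwaway self-signed cert (subject is illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null

hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
rm -rf "$dir"
```

The `.0` suffix disambiguates multiple certificates that share a subject hash (`.1`, `.2`, ...); `c_rehash` automates the same layout for a whole directory.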
I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0414 14:29:15.480841 1213155 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0414 14:29:26.377483 1213155 out.go:235] - Generating certificates and keys ...
I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0414 14:29:26.381520 1213155 out.go:235] - Booting up control plane ...
I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
I0414 14:29:26.384515 1213155 out.go:235] - Configuring RBAC rules ...
I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0414 14:29:26.385416 1213155 kubeadm.go:310]
I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0414 14:29:26.385475 1213155 kubeadm.go:310]
I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0414 14:29:26.385557 1213155 kubeadm.go:310]
I0414 14:29:26.385578 1213155 kubeadm.go:310] mkdir -p $HOME/.kube
I0414 14:29:26.385628 1213155 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0414 14:29:26.385686 1213155 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0414 14:29:26.385693 1213155 kubeadm.go:310]
I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0414 14:29:26.385752 1213155 kubeadm.go:310]
I0414 14:29:26.385800 1213155 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0414 14:29:26.385806 1213155 kubeadm.go:310]
I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0414 14:29:26.385993 1213155 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0414 14:29:26.385999 1213155 kubeadm.go:310]
I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0414 14:29:26.386143 1213155 kubeadm.go:310]
I0414 14:29:26.386219 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
I0414 14:29:26.386324 1213155 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
I0414 14:29:26.386357 1213155 kubeadm.go:310] --control-plane
I0414 14:29:26.386367 1213155 kubeadm.go:310]
I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0414 14:29:26.386481 1213155 kubeadm.go:310]
I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
I0414 14:29:26.386727 1213155 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b
I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20512-1196368/kubeconfig
I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file: /home/jenkins/minikube-integration/20512-1196368/kubeconfig
I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.785084 1213155 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
I0414 14:29:30.716785 1213155 round_trippers.go:480] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:29:30.716791 1213155 round_trippers.go:480] Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
I0414 14:29:30.730655 1213155 round_trippers.go:480] Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:29:30.730659 1213155 round_trippers.go:480] Content-Type: application/vnd.kubernetes.protobuf
I0414 14:29:30.730662 1213155 round_trippers.go:480] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
I0414 14:29:30.739993 1213155 out.go:201]
I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:h
a-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml:
I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02) <name>ha-290859-m02</name>
I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02) <memory unit='MiB'>2200</memory>
I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02) <vcpu>2</vcpu>
I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02) <features>
I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02) <acpi/>
I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02) <apic/>
I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02) <pae/>
I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02) </features>
I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02) <cpu mode='host-passthrough'>
I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02) </cpu>
I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02) <os>
I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02) <type>hvm</type>
I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02) <boot dev='cdrom'/>
I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02) <boot dev='hd'/>
I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02) <bootmenu enable='no'/>
I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02) </os>
I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02) <devices>
I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02) <disk type='file' device='cdrom'>
I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02) <target dev='hdc' bus='scsi'/>
I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02) <readonly/>
I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02) </disk>
I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02) <disk type='file' device='disk'>
I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02) <target dev='hda' bus='virtio'/>
I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02) </disk>
I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02) <interface type='network'>
I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02) <source network='mk-ha-290859'/>
I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02) <model type='virtio'/>
I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02) </interface>
I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02) <interface type='network'>
I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02) <source network='default'/>
I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02) <model type='virtio'/>
I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02) </interface>
I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02) <serial type='pty'>
I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02) <target port='0'/>
I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02) </serial>
I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02) <console type='pty'>
I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02) <target type='serial' port='0'/>
I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02) </console>
I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02) <rng model='virtio'>
I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02) <backend model='random'>/dev/random</backend>
I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02) </rng>
I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02) </devices>
I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>:
I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
exit 0
I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts;
fi
fi
I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
I0414 14:29:54.375127 1213155 out.go:177] - NO_PROXY=192.168.39.110
W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 14:29:56.403201 1213155 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
I0414 14:29:56.455785 1213155 out.go:177] - env NO_PROXY=192.168.39.110
I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
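The `ln -fs` commands above install each CA under `/etc/ssl/certs/<hash>.0`, where `<hash>` is the subject hash printed by `openssl x509 -hash -noout`, and the `test -L … ||` / `test -s … &&` guards make every step idempotent across re-runs. A sketch of the same pattern confined to a temp directory (the hash value `b5213941` is copied from the log rather than recomputed, and the cert body is a fake):

```shell
set -eu
root=$(mktemp -d)
mkdir -p "$root/usr/share/ca-certificates" "$root/etc/ssl/certs"
printf 'fake-cert' > "$root/usr/share/ca-certificates/minikubeCA.pem"
# install only when the source cert is non-empty, as the log's "test -s" does
test -s "$root/usr/share/ca-certificates/minikubeCA.pem" && \
  ln -fs "$root/usr/share/ca-certificates/minikubeCA.pem" \
         "$root/etc/ssl/certs/minikubeCA.pem"
# create the OpenSSL hash link only if it is not already there,
# so re-running the provisioning step is a no-op
test -L "$root/etc/ssl/certs/b5213941.0" || \
  ln -fs "$root/etc/ssl/certs/minikubeCA.pem" "$root/etc/ssl/certs/b5213941.0"
```

The `<hash>.0` naming is what OpenSSL's certificate-directory lookup expects, which is why minikube creates both the named PEM and the hash symlink.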
I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name : lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.10
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
Initiating transfer...
I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
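The `checksum=file:…sha256` query on the download URLs above tells the getter to fetch the published `.sha256` file alongside the binary and compare digests before the download is accepted. A local sketch of that verify-then-install step, with no network and stand-in file contents:

```shell
set -eu
dir=$(mktemp -d)
# stand-in for the downloaded binary and its published .sha256 file
printf 'kubeadm-binary-bytes' > "$dir/kubeadm.download"
sha256sum "$dir/kubeadm.download" | awk '{print $1}' > "$dir/kubeadm.sha256"
# compare the expected digest against the actual one; only promote the
# .download file to its final name when they match
want=$(cat "$dir/kubeadm.sha256")
got=$(sha256sum "$dir/kubeadm.download" | awk '{print $1}')
[ "$want" = "$got" ] && mv "$dir/kubeadm.download" "$dir/kubeadm"
```

Downloading to a `.download` suffix and renaming only after verification is why a failed or interrupted transfer (like the `connection reset by peer` later in this log) never leaves a half-written binary at the final path.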
I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
I0414 14:29:58.749359 1213155 out.go:201]
W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
W0414 14:29:58.750801 1213155 out.go:270] *
W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 14:29:58.753070 1213155 out.go:201]
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 start -p ha-290859 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.175005073s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs:
-- stdout --
==> Audit <==
|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| ssh | functional-905978 ssh findmnt | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| mount | -p functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | /tmp/TestFunctionalparallelMountCmdspecific-port1389122606/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 --port 46464 | | | | | |
| ssh | functional-905978 ssh findmnt | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-905978 ssh -- ls | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | -la /mount-9p | | | | | |
| ssh | functional-905978 ssh sudo | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | umount -f /mount-9p | | | | | |
| mount | -p functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount1 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| mount | -p functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-905978 ssh findmnt | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | -T /mount1 | | | | | |
| mount | -p functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-905978 ssh findmnt | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | -T /mount1 | | | | | |
| ssh | functional-905978 ssh findmnt | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | -T /mount2 | | | | | |
| ssh | functional-905978 ssh findmnt | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | -T /mount3 | | | | | |
| mount | -p functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | --kill=true | | | | | |
| image | functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | image ls --format short | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | image ls --format yaml | | | | | |
| | --alsologtostderr | | | | | |
| ssh | functional-905978 ssh pgrep | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | buildkitd | | | | | |
| image | functional-905978 image build -t | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | localhost/my-image:functional-905978 | | | | | |
| | testdata/build --alsologtostderr | | | | | |
| image | functional-905978 image ls | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| image | functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | image ls --format json | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | image ls --format table | | | | | |
| | --alsologtostderr | | | | | |
| update-context | functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| delete | -p functional-905978 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
| start | -p ha-290859 --wait=true | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | |
| | --memory=2200 --ha | | | | | |
| | -v=7 --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/14 14:28:44
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0414 14:28:44.858763 1213155 out.go:177] - MINIKUBE_LOCATION=20512
I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
I0414 14:28:44.861113 1213155 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 14:28:44.862568 1213155 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
I0414 14:28:44.864291 1213155 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:28:44.865558 1213155 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0414 14:28:44.866690 1213155 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0414 14:28:44.921581 1213155 start.go:340] cluster config:
{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml:
I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG | <name>mk-ha-290859</name>
I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG | <dns enable='no'/>
I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |
I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG | <dhcp>
I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG | </dhcp>
I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG | </ip>
I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |
I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG |
I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml:
I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859) <name>ha-290859</name>
I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859) <memory unit='MiB'>2200</memory>
I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859) <vcpu>2</vcpu>
I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859) <features>
I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859) <acpi/>
I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859) <apic/>
I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859) <pae/>
I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859) </features>
I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859) <cpu mode='host-passthrough'>
I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859) </cpu>
I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859) <os>
I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859) <type>hvm</type>
I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859) <boot dev='cdrom'/>
I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859) <boot dev='hd'/>
I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859) <bootmenu enable='no'/>
I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859) </os>
I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859) <devices>
I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859) <disk type='file' device='cdrom'>
I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859) <target dev='hdc' bus='scsi'/>
I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859) <readonly/>
I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859) </disk>
I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859) <disk type='file' device='disk'>
I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859) <driver name='qemu' type='raw' cache='default' io='threads' />
I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859) <target dev='hda' bus='virtio'/>
I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859) </disk>
I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859) <interface type='network'>
I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859) <source network='mk-ha-290859'/>
I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859) <model type='virtio'/>
I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859) </interface>
I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859) <interface type='network'>
I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859) <source network='default'/>
I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859) <model type='virtio'/>
I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859) </interface>
I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859) <serial type='pty'>
I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859) <target port='0'/>
I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859) </serial>
I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859) <console type='pty'>
I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859) <target type='serial' port='0'/>
I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859) </console>
I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859) <rng model='virtio'>
I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859) <backend model='random'>/dev/random</backend>
I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859) </rng>
I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859) </devices>
I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859)
I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>:
I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
exit 0
I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-290859' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
else
echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts;
fi
fi
I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.110 22 <nil> <nil>}
I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 14:29:09.901676 1213155 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.110
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "ha-290859"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.39.110"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
    - name: "proxy-refresh-interval"
      value: "70000"
kubernetesVersion: v1.32.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.39.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.10
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/super-admin.conf"
    name: kubeconfig
status: {}
I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0414 14:29:15.480841 1213155 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0414 14:29:26.377483 1213155 out.go:235] - Generating certificates and keys ...
I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0414 14:29:26.381520 1213155 out.go:235] - Booting up control plane ...
I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
I0414 14:29:26.384515 1213155 out.go:235] - Configuring RBAC rules ...
I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0414 14:29:26.385416 1213155 kubeadm.go:310]
I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0414 14:29:26.385475 1213155 kubeadm.go:310]
I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0414 14:29:26.385557 1213155 kubeadm.go:310]
I0414 14:29:26.385578 1213155 kubeadm.go:310] mkdir -p $HOME/.kube
I0414 14:29:26.385628 1213155 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0414 14:29:26.385686 1213155 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0414 14:29:26.385693 1213155 kubeadm.go:310]
I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0414 14:29:26.385752 1213155 kubeadm.go:310]
I0414 14:29:26.385800 1213155 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0414 14:29:26.385806 1213155 kubeadm.go:310]
I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0414 14:29:26.385993 1213155 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0414 14:29:26.385999 1213155 kubeadm.go:310]
I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0414 14:29:26.386143 1213155 kubeadm.go:310]
I0414 14:29:26.386219 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
I0414 14:29:26.386324 1213155 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
I0414 14:29:26.386357 1213155 kubeadm.go:310] --control-plane
I0414 14:29:26.386367 1213155 kubeadm.go:310]
I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0414 14:29:26.386481 1213155 kubeadm.go:310]
I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
I0414 14:29:26.386727 1213155 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b
I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20512-1196368/kubeconfig
I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file: /home/jenkins/minikube-integration/20512-1196368/kubeconfig
I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.785084 1213155 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
I0414 14:29:30.716785 1213155 round_trippers.go:480] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:29:30.716791 1213155 round_trippers.go:480] Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
I0414 14:29:30.730655 1213155 round_trippers.go:480] Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:29:30.730659 1213155 round_trippers.go:480] Content-Type: application/vnd.kubernetes.protobuf
I0414 14:29:30.730662 1213155 round_trippers.go:480] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
I0414 14:29:30.739993 1213155 out.go:201]
I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml:
I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02) <name>ha-290859-m02</name>
I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02) <memory unit='MiB'>2200</memory>
I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02) <vcpu>2</vcpu>
I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02) <features>
I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02) <acpi/>
I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02) <apic/>
I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02) <pae/>
I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02) </features>
I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02) <cpu mode='host-passthrough'>
I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02) </cpu>
I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02) <os>
I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02) <type>hvm</type>
I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02) <boot dev='cdrom'/>
I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02) <boot dev='hd'/>
I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02) <bootmenu enable='no'/>
I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02) </os>
I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02) <devices>
I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02) <disk type='file' device='cdrom'>
I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02) <target dev='hdc' bus='scsi'/>
I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02) <readonly/>
I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02) </disk>
I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02) <disk type='file' device='disk'>
I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02) <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02) <target dev='hda' bus='virtio'/>
I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02) </disk>
I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02) <interface type='network'>
I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02) <source network='mk-ha-290859'/>
I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02) <model type='virtio'/>
I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02) </interface>
I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02) <interface type='network'>
I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02) <source network='default'/>
I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02) <model type='virtio'/>
I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02) </interface>
I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02) <serial type='pty'>
I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02) <target port='0'/>
I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02) </serial>
I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02) <console type='pty'>
I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02) <target type='serial' port='0'/>
I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02) </console>
I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02) <rng model='virtio'>
I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02) <backend model='random'>/dev/random</backend>
I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02) </rng>
I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02) </devices>
I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02)
I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>:
I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
exit 0
I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts;
fi
fi
I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
I0414 14:29:54.375127 1213155 out.go:177] - NO_PROXY=192.168.39.110
W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 14:29:56.403201 1213155 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
I0414 14:29:56.455785 1213155 out.go:177] - env NO_PROXY=192.168.39.110
I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version 1
I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
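The `ls`/`openssl x509 -hash`/`ln -fs` sequence above installs each CA cert and links it under its OpenSSL subject hash so TLS clients can find it. A minimal sketch of that idempotent link pattern, using illustrative paths under `/tmp/demo-certs` (not minikube's real locations), with the real hash-link commands shown as comments:

```shell
# Hedged sketch of the pattern logged above. Paths are illustrative.
set -eu
certs=/tmp/demo-certs
mkdir -p "$certs"
printf 'fake-pem\n' > "$certs/minikubeCA.pem"   # stand-in for a real cert

# Link only when the source is non-empty -- same shape as the logged
# `test -s ... && ln -fs ...` and `test -L ... || ln -fs ...` commands.
test -s "$certs/minikubeCA.pem" && ln -fs "$certs/minikubeCA.pem" "$certs/installed.pem"

# On a real host the link name is the cert's subject hash, e.g.:
#   openssl x509 -hash -noout -in minikubeCA.pem    # prints e.g. b5213941
#   ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
readlink "$certs/installed.pem"
```

Because `ln -fs` replaces an existing link, rerunning the sequence is safe, which is why minikube can repeat it on every start.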
I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.10
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
Initiating transfer...
I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
I0414 14:29:58.749359 1213155 out.go:201]
W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:
0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
W0414 14:29:58.750801 1213155 out.go:270] *
W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 14:29:58.753070 1213155 out.go:201]
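The GUEST_START failure above is a transient TCP reset (`read: connection reset by peer`) while downloading kubeadm from dl.k8s.io. The usual remedy for this class of flake is to retry the download with backoff. A hedged, self-contained sketch of that retry shape, where `flaky` stands in for the real `curl -fsSL -o kubeadm https://dl.k8s.io/...` fetch (everything here is illustrative, not minikube's implementation):

```shell
# Retry-with-backoff sketch; `flaky` simulates a fetch that fails twice.
set -u
state=/tmp/flaky-attempts
: > "$state"                   # reset the attempt counter file

flaky() {                      # fails on the first two calls, then succeeds
  n=$(wc -l < "$state")
  echo x >> "$state"
  [ "$n" -ge 2 ]
}

retry() {                      # retry "$@" up to 5 times with linear backoff
  i=1
  until "$@"; do
    [ "$i" -ge 5 ] && return 1
    sleep "$i"                 # a real downloader might back off longer
    i=$((i + 1))
  done
}

retry flaky && echo "succeeded after $(wc -l < "$state") attempts"
```

A real fix would also verify the fetched binary against the published `kubeadm.sha256` before installing it, as the minikube getter already does via the `checksum=file:` query parameter.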
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
731a9f2fe8645 c69fa2e9cbf5f 14 seconds ago Running coredns 0 e56d2e4c87eea coredns-668d6bf9bc-qnl6q
0ec0a3a234c7c c69fa2e9cbf5f 14 seconds ago Running coredns 0 2818c413e6e32 coredns-668d6bf9bc-wbn4p
922f97d06563e 6e38f40d628db 14 seconds ago Running storage-provisioner 0 4de376d34ee7f storage-provisioner
2df8ccb8d6ed9 df3849d954c98 26 seconds ago Running kindnet-cni 0 08244cfc780bd kindnet-hm99t
e22a81661302f f1332858868e1 29 seconds ago Running kube-proxy 0 f20a0bcfbd507 kube-proxy-cg945
9914f8879fc43 6ff023a402a69 37 seconds ago Running kube-vip 0 7b4e857fc4a72 kube-vip-ha-290859
8263b35014337 b6a454c5a800d 40 seconds ago Running kube-controller-manager 0 96ffccfabb2f0 kube-controller-manager-ha-290859
3607093f95b04 85b7a174738ba 40 seconds ago Running kube-apiserver 0 7d06c53c8318a kube-apiserver-ha-290859
b9d0c94204534 a9e7e6b294baf 40 seconds ago Running etcd 0 07c98c2ded11c etcd-ha-290859
341626ffff967 d8e673e7c9983 40 seconds ago Running kube-scheduler 0 d86edf81d4f34 kube-scheduler-ha-290859
==> containerd <==
Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.944257172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.944335026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991327229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991399429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991414789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991553876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.006971699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.007117025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.007134486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.015183713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.035724100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:a98bb55f-5a73-4436-82eb-ae7534928039,Namespace:kube-system,Attempt:0,} returns sandbox id \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.049022093Z" level=info msg="CreateContainer within sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.082712088Z" level=info msg="CreateContainer within sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.083397395Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.120635029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbn4p,Uid:5c2a6c8d-60f5-466d-8f59-f43a26cf06c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2818c413e6e32cda88d124ae36bfe42091bf5832b899e50c953444aea7c8118e\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.125584339Z" level=info msg="CreateContainer within sandbox \"2818c413e6e32cda88d124ae36bfe42091bf5832b899e50c953444aea7c8118e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.165622128Z" level=info msg="CreateContainer within sandbox \"2818c413e6e32cda88d124ae36bfe42091bf5832b899e50c953444aea7c8118e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
==> describe nodes <==
Name: ha-290859
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-290859
kubernetes.io/os=linux
minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
minikube.k8s.io/name=ha-290859
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 14 Apr 2025 14:29:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-290859
AcquireTime: <unset>
RenewTime: Mon, 14 Apr 2025 14:29:56 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 14 Apr 2025 14:29:56 +0000 Mon, 14 Apr 2025 14:29:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 14 Apr 2025 14:29:56 +0000 Mon, 14 Apr 2025 14:29:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 14 Apr 2025 14:29:56 +0000 Mon, 14 Apr 2025 14:29:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 14 Apr 2025 14:29:56 +0000 Mon, 14 Apr 2025 14:29:44 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.110
Hostname: ha-290859
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 0538f5775f954b3bbf6bc94e8eb6c49a
System UUID: 0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
Boot ID: 357ae105-a7f9-47b1-bf31-1c1aadedfe92
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-668d6bf9bc-qnl6q 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 30s
kube-system coredns-668d6bf9bc-wbn4p 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 30s
kube-system etcd-ha-290859 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 34s
kube-system kindnet-hm99t 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 30s
kube-system kube-apiserver-ha-290859 250m (12%) 0 (0%) 0 (0%) 0 (0%) 34s
kube-system kube-controller-manager-ha-290859 200m (10%) 0 (0%) 0 (0%) 0 (0%) 34s
kube-system kube-proxy-cg945 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30s
kube-system kube-scheduler-ha-290859 100m (5%) 0 (0%) 0 (0%) 0 (0%) 34s
kube-system kube-vip-ha-290859 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 290Mi (13%) 390Mi (18%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 29s kube-proxy
Normal Starting 34s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 34s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 34s kubelet Node ha-290859 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 34s kubelet Node ha-290859 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 34s kubelet Node ha-290859 status is now: NodeHasSufficientPID
Normal RegisteredNode 31s node-controller Node ha-290859 event: Registered Node ha-290859 in Controller
Normal NodeReady 15s kubelet Node ha-290859 status is now: NodeReady
==> dmesg <==
[Apr14 14:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
[ +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
[ +0.058894] kauditd_printk_skb: 1 callbacks suppressed
[ +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
[ +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
[ +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
[ +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
[ +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
[ +0.061817] kauditd_printk_skb: 158 callbacks suppressed
[ +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
[ +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
[ +0.054755] kauditd_printk_skb: 46 callbacks suppressed
[ +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
[ +0.092655] kauditd_printk_skb: 79 callbacks suppressed
[ +5.133260] kauditd_printk_skb: 36 callbacks suppressed
[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
{"level":"info","ts":"2025-04-14T14:29:20.934693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 1"}
{"level":"info","ts":"2025-04-14T14:29:20.934727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 1"}
{"level":"info","ts":"2025-04-14T14:29:20.934744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 2"}
{"level":"info","ts":"2025-04-14T14:29:20.934754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 2"}
{"level":"info","ts":"2025-04-14T14:29:20.934880Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 2"}
{"level":"info","ts":"2025-04-14T14:29:20.934897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 2"}
{"level":"info","ts":"2025-04-14T14:29:20.938840Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
{"level":"info","ts":"2025-04-14T14:29:20.938875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-14T14:29:20.939017Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree' (duration: 161.141805ms)"],"step_count":1}
{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request' (duration: 84.378612ms)","trace[1003808847] 'compare' (duration: 71.52798ms)"],"step_count":2}
{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received' (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index' (duration: 72.100547ms)"],"step_count":2}
{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading' (duration: 120.384333ms)"],"step_count":1}
==> kernel <==
14:29:59 up 1 min, 0 users, load average: 0.20, 0.08, 0.03
Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
I0414 14:29:33.700839 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I0414 14:29:33.701358 1 main.go:139] hostIP = 192.168.39.110
podIP = 192.168.39.110
I0414 14:29:33.793646 1 main.go:148] setting mtu 1500 for CNI
I0414 14:29:33.793783 1 main.go:178] kindnetd IP family: "ipv4"
I0414 14:29:33.793875 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I0414 14:29:34.500111 1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
add table inet kindnet-network-policies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
, skipping network policies
I0414 14:29:44.503197 1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
I0414 14:29:44.503441 1 main.go:301] handling current node
I0414 14:29:54.509621 1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
I0414 14:29:54.509758 1 main.go:301] handling current node
==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
I0414 14:29:22.336292 1 policy_source.go:240] refreshing policies
E0414 14:29:22.338963 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0414 14:29:22.361649 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0414 14:29:22.361941 1 shared_informer.go:320] Caches are synced for configmaps
I0414 14:29:22.362262 1 aggregator.go:171] initial CRD sync complete...
I0414 14:29:22.362271 1 autoregister_controller.go:144] Starting autoregister controller
I0414 14:29:22.362276 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0414 14:29:22.362280 1 cache.go:39] Caches are synced for autoregister controller
I0414 14:29:22.378719 1 controller.go:615] quota admission added evaluator for: namespaces
I0414 14:29:22.457815 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0414 14:29:23.164003 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0414 14:29:23.168635 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0414 14:29:23.168816 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0414 14:29:23.763560 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0414 14:29:23.812117 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0414 14:29:23.884276 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0414 14:29:23.896601 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
I0414 14:29:23.897534 1 controller.go:615] quota admission added evaluator for: endpoints
I0414 14:29:23.902387 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0414 14:29:24.193931 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0414 14:29:25.780107 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0414 14:29:25.808820 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0414 14:29:25.816856 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0414 14:29:29.653221 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0414 14:29:29.756960 1 controller.go:615] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
I0414 14:29:28.843253 1 shared_informer.go:320] Caches are synced for deployment
I0414 14:29:28.844034 1 shared_informer.go:320] Caches are synced for persistent volume
I0414 14:29:28.844299 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0414 14:29:28.848906 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0414 14:29:28.849212 1 shared_informer.go:320] Caches are synced for garbage collector
I0414 14:29:28.849296 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I0414 14:29:28.849401 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I0414 14:29:28.849617 1 shared_informer.go:320] Caches are synced for resource quota
I0414 14:29:28.850996 1 shared_informer.go:320] Caches are synced for stateful set
I0414 14:29:29.000358 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
I0414 14:29:29.886420 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.420823ms"
I0414 14:29:29.906585 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.109075ms"
I0414 14:29:29.906712 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="88.01µs"
I0414 14:29:44.519476 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
I0414 14:29:44.534945 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
I0414 14:29:44.547691 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.626341ms"
I0414 14:29:44.559315 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.802µs"
I0414 14:29:44.571127 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="74.78µs"
I0414 14:29:44.594711 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="70.198µs"
I0414 14:29:45.825051 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.769469ms"
I0414 14:29:45.826885 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.591µs"
I0414 14:29:45.846118 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.808387ms"
I0414 14:29:45.849026 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.566µs"
I0414 14:29:48.846765 1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
I0414 14:29:56.189929 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0414 14:29:30.555771 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0414 14:29:30.580550 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
E0414 14:29:30.580640 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0414 14:29:30.617235 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0414 14:29:30.617293 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0414 14:29:30.617328 1 server_linux.go:170] "Using iptables Proxier"
I0414 14:29:30.620046 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0414 14:29:30.620989 1 server.go:497] "Version info" version="v1.32.2"
I0414 14:29:30.621018 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0414 14:29:30.625365 1 config.go:329] "Starting node config controller"
I0414 14:29:30.625863 1 shared_informer.go:313] Waiting for caches to sync for node config
I0414 14:29:30.628597 1 config.go:199] "Starting service config controller"
I0414 14:29:30.628644 1 shared_informer.go:313] Waiting for caches to sync for service config
I0414 14:29:30.628665 1 config.go:105] "Starting endpoint slice config controller"
I0414 14:29:30.628683 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0414 14:29:30.726314 1 shared_informer.go:320] Caches are synced for node config
I0414 14:29:30.729639 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0414 14:29:30.729680 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
E0414 14:29:22.288686 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 14:29:22.287191 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0414 14:29:22.288704 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 14:29:22.286394 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0414 14:29:22.288719 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0414 14:29:22.285771 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.108289 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0414 14:29:23.108351 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.153824 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0414 14:29:23.153954 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.203744 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0414 14:29:23.203977 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0414 14:29:23.367236 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0414 14:29:23.367550 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.396026 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0414 14:29:23.396243 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.401643 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0414 14:29:23.401820 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.425454 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0414 14:29:23.425684 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.433181 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0414 14:29:23.433222 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 14:29:23.457688 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0414 14:29:23.457949 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0414 14:29:25.662221 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 14 14:29:26 ha-290859 kubelet[1300]: I0414 14:29:26.859439 1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ha-290859" podStartSLOduration=1.859425056 podStartE2EDuration="1.859425056s" podCreationTimestamp="2025-04-14 14:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:26.837835811 +0000 UTC m=+1.278826064" watchObservedRunningTime="2025-04-14 14:29:26.859425056 +0000 UTC m=+1.300415308"
Apr 14 14:29:26 ha-290859 kubelet[1300]: I0414 14:29:26.859604 1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ha-290859" podStartSLOduration=1.859595615 podStartE2EDuration="1.859595615s" podCreationTimestamp="2025-04-14 14:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:26.859205988 +0000 UTC m=+1.300196243" watchObservedRunningTime="2025-04-14 14:29:26.859595615 +0000 UTC m=+1.300585870"
Apr 14 14:29:28 ha-290859 kubelet[1300]: I0414 14:29:28.789189 1300 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Apr 14 14:29:28 ha-290859 kubelet[1300]: I0414 14:29:28.790117 1300 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800169 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rxv\" (UniqueName: \"kubernetes.io/projected/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-kube-api-access-z2rxv\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800223 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bd6869b-0b23-4901-b9fa-02d62196a4f0-lib-modules\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800244 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-cni-cfg\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800258 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-lib-modules\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800273 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcxv\" (UniqueName: \"kubernetes.io/projected/4bd6869b-0b23-4901-b9fa-02d62196a4f0-kube-api-access-ldcxv\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800291 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bd6869b-0b23-4901-b9fa-02d62196a4f0-kube-proxy\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800318 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd6869b-0b23-4901-b9fa-02d62196a4f0-xtables-lock\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800334 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-xtables-lock\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.927080 1300 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Apr 14 14:29:30 ha-290859 kubelet[1300]: I0414 14:29:30.759848 1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cg945" podStartSLOduration=1.759822313 podStartE2EDuration="1.759822313s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:30.758021458 +0000 UTC m=+5.199011713" watchObservedRunningTime="2025-04-14 14:29:30.759822313 +0000 UTC m=+5.200812548"
Apr 14 14:29:38 ha-290859 kubelet[1300]: I0414 14:29:38.319236 1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hm99t" podStartSLOduration=6.431741101 podStartE2EDuration="9.319165475s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="2025-04-14 14:29:30.4398268 +0000 UTC m=+4.880817048" lastFinishedPulling="2025-04-14 14:29:33.327251182 +0000 UTC m=+7.768241422" observedRunningTime="2025-04-14 14:29:33.777221168 +0000 UTC m=+8.218211403" watchObservedRunningTime="2025-04-14 14:29:38.319165475 +0000 UTC m=+12.760155728"
Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.505879 1300 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.603696 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a590080d-c4b1-4697-9849-ae6130e483a3-config-volume\") pod \"coredns-668d6bf9bc-qnl6q\" (UID: \"a590080d-c4b1-4697-9849-ae6130e483a3\") " pod="kube-system/coredns-668d6bf9bc-qnl6q"
Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.603889 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lng\" (UniqueName: \"kubernetes.io/projected/a590080d-c4b1-4697-9849-ae6130e483a3-kube-api-access-k9lng\") pod \"coredns-668d6bf9bc-qnl6q\" (UID: \"a590080d-c4b1-4697-9849-ae6130e483a3\") " pod="kube-system/coredns-668d6bf9bc-qnl6q"
Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604007 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggjh\" (UniqueName: \"kubernetes.io/projected/5c2a6c8d-60f5-466d-8f59-f43a26cf06c4-kube-api-access-sggjh\") pod \"coredns-668d6bf9bc-wbn4p\" (UID: \"5c2a6c8d-60f5-466d-8f59-f43a26cf06c4\") " pod="kube-system/coredns-668d6bf9bc-wbn4p"
Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604073 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2a6c8d-60f5-466d-8f59-f43a26cf06c4-config-volume\") pod \"coredns-668d6bf9bc-wbn4p\" (UID: \"5c2a6c8d-60f5-466d-8f59-f43a26cf06c4\") " pod="kube-system/coredns-668d6bf9bc-wbn4p"
Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604118 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a98bb55f-5a73-4436-82eb-ae7534928039-tmp\") pod \"storage-provisioner\" (UID: \"a98bb55f-5a73-4436-82eb-ae7534928039\") " pod="kube-system/storage-provisioner"
Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604163 1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnm4d\" (UniqueName: \"kubernetes.io/projected/a98bb55f-5a73-4436-82eb-ae7534928039-kube-api-access-xnm4d\") pod \"storage-provisioner\" (UID: \"a98bb55f-5a73-4436-82eb-ae7534928039\") " pod="kube-system/storage-provisioner"
Apr 14 14:29:45 ha-290859 kubelet[1300]: I0414 14:29:45.804448 1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.804430214 podStartE2EDuration="15.804430214s" podCreationTimestamp="2025-04-14 14:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:45.792326929 +0000 UTC m=+20.233317179" watchObservedRunningTime="2025-04-14 14:29:45.804430214 +0000 UTC m=+20.245420469"
Apr 14 14:29:45 ha-290859 kubelet[1300]: I0414 14:29:45.830229 1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wbn4p" podStartSLOduration=16.830170415 podStartE2EDuration="16.830170415s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:45.80569588 +0000 UTC m=+20.246686135" watchObservedRunningTime="2025-04-14 14:29:45.830170415 +0000 UTC m=+20.271160663"
Apr 14 14:29:45 ha-290859 kubelet[1300]: I0414 14:29:45.830711 1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qnl6q" podStartSLOduration=16.830651166 podStartE2EDuration="16.830651166s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:45.828813483 +0000 UTC m=+20.269803765" watchObservedRunningTime="2025-04-14 14:29:45.830651166 +0000 UTC m=+20.271641420"
==> storage-provisioner [922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b] <==
I0414 14:29:45.362622 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0414 14:29:45.429344 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0414 14:29:45.429932 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0414 14:29:45.442302 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0414 14:29:45.443637 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cd1340a-7958-40a2-8c68-004b8c8385a8", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b became leader
I0414 14:29:45.444610 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
I0414 14:29:45.546579 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run: kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (75.86s)