=== RUN TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p ha-333994 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-333994 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd: exit status 80 (1m36.793951326s)
-- stdout --
* [ha-333994] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=19283
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19283-14409/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14409/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting "ha-333994" primary control-plane node in "ha-333994" cluster
* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.2 on containerd 1.7.19 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Starting "ha-333994-m02" control-plane node in "ha-333994" cluster
* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
* Found network options:
- NO_PROXY=192.168.39.180
* Preparing Kubernetes v1.30.2 on containerd 1.7.19 ...
- env NO_PROXY=192.168.39.180
-- /stdout --
** stderr **
I0717 17:25:37.372173 31817 out.go:291] Setting OutFile to fd 1 ...
I0717 17:25:37.372300 31817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:25:37.372309 31817 out.go:304] Setting ErrFile to fd 2...
I0717 17:25:37.372316 31817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:25:37.372515 31817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14409/.minikube/bin
I0717 17:25:37.373068 31817 out.go:298] Setting JSON to false
I0717 17:25:37.373934 31817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4080,"bootTime":1721233057,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0717 17:25:37.373990 31817 start.go:139] virtualization: kvm guest
I0717 17:25:37.376261 31817 out.go:177] * [ha-333994] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0717 17:25:37.377830 31817 notify.go:220] Checking for updates...
I0717 17:25:37.377854 31817 out.go:177] - MINIKUBE_LOCATION=19283
I0717 17:25:37.379322 31817 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 17:25:37.380779 31817 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19283-14409/kubeconfig
I0717 17:25:37.382329 31817 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:25:37.383666 31817 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0717 17:25:37.384940 31817 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 17:25:37.386314 31817 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 17:25:37.420051 31817 out.go:177] * Using the kvm2 driver based on user configuration
I0717 17:25:37.421589 31817 start.go:297] selected driver: kvm2
I0717 17:25:37.421607 31817 start.go:901] validating driver "kvm2" against <nil>
I0717 17:25:37.421618 31817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 17:25:37.422327 31817 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 17:25:37.422404 31817 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14409/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0717 17:25:37.437115 31817 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0717 17:25:37.437156 31817 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0717 17:25:37.437363 31817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 17:25:37.437413 31817 cni.go:84] Creating CNI manager for ""
I0717 17:25:37.437423 31817 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I0717 17:25:37.437432 31817 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0717 17:25:37.437478 31817 start.go:340] cluster config:
{Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 17:25:37.437562 31817 iso.go:125] acquiring lock: {Name:mk9ca422a70055a342d5e4afb354786e16c8e9d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 17:25:37.439313 31817 out.go:177] * Starting "ha-333994" primary control-plane node in "ha-333994" cluster
I0717 17:25:37.440697 31817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 17:25:37.440738 31817 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4
I0717 17:25:37.440745 31817 cache.go:56] Caching tarball of preloaded images
I0717 17:25:37.440816 31817 preload.go:172] Found /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 17:25:37.440827 31817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on containerd
I0717 17:25:37.441104 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:25:37.441121 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json: {Name:mk758d67ae5c79043a711460bac8ff59da52dd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:25:37.441235 31817 start.go:360] acquireMachinesLock for ha-333994: {Name:mk0f74b853b0d6e269bf0c6a25c6edeb4f1994c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 17:25:37.441263 31817 start.go:364] duration metric: took 16.553µs to acquireMachinesLock for "ha-333994"
I0717 17:25:37.441278   31817 start.go:93] Provisioning new machine with config: &{Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 17:25:37.441331 31817 start.go:125] createHost starting for "" (driver="kvm2")
I0717 17:25:37.442904 31817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0717 17:25:37.443026 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:25:37.443066 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:25:37.456958 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
I0717 17:25:37.457401 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:25:37.457924 31817 main.go:141] libmachine: Using API Version 1
I0717 17:25:37.457953 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:25:37.458234 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:25:37.458399 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:37.458508 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:37.458638 31817 start.go:159] libmachine.API.Create for "ha-333994" (driver="kvm2")
I0717 17:25:37.458664 31817 client.go:168] LocalClient.Create starting
I0717 17:25:37.458690 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem
I0717 17:25:37.458718 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:25:37.458731 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:25:37.458776 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem
I0717 17:25:37.458792 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:25:37.458803 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:25:37.458817 31817 main.go:141] libmachine: Running pre-create checks...
I0717 17:25:37.458825 31817 main.go:141] libmachine: (ha-333994) Calling .PreCreateCheck
I0717 17:25:37.459073 31817 main.go:141] libmachine: (ha-333994) Calling .GetConfigRaw
I0717 17:25:37.459495 31817 main.go:141] libmachine: Creating machine...
I0717 17:25:37.459514 31817 main.go:141] libmachine: (ha-333994) Calling .Create
I0717 17:25:37.459622 31817 main.go:141] libmachine: (ha-333994) Creating KVM machine...
I0717 17:25:37.460734 31817 main.go:141] libmachine: (ha-333994) DBG | found existing default KVM network
I0717 17:25:37.461376 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:37.461245 31840 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
I0717 17:25:37.461396 31817 main.go:141] libmachine: (ha-333994) DBG | created network xml:
I0717 17:25:37.461405 31817 main.go:141] libmachine: (ha-333994) DBG | <network>
I0717 17:25:37.461410 31817 main.go:141] libmachine: (ha-333994) DBG | <name>mk-ha-333994</name>
I0717 17:25:37.461416 31817 main.go:141] libmachine: (ha-333994) DBG | <dns enable='no'/>
I0717 17:25:37.461420 31817 main.go:141] libmachine: (ha-333994) DBG |
I0717 17:25:37.461438 31817 main.go:141] libmachine: (ha-333994) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0717 17:25:37.461448 31817 main.go:141] libmachine: (ha-333994) DBG | <dhcp>
I0717 17:25:37.461459 31817 main.go:141] libmachine: (ha-333994) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0717 17:25:37.461473 31817 main.go:141] libmachine: (ha-333994) DBG | </dhcp>
I0717 17:25:37.461490 31817 main.go:141] libmachine: (ha-333994) DBG | </ip>
I0717 17:25:37.461499 31817 main.go:141] libmachine: (ha-333994) DBG |
I0717 17:25:37.461508 31817 main.go:141] libmachine: (ha-333994) DBG | </network>
I0717 17:25:37.461513 31817 main.go:141] libmachine: (ha-333994) DBG |
I0717 17:25:37.467087 31817 main.go:141] libmachine: (ha-333994) DBG | trying to create private KVM network mk-ha-333994 192.168.39.0/24...
I0717 17:25:37.530969 31817 main.go:141] libmachine: (ha-333994) DBG | private KVM network mk-ha-333994 192.168.39.0/24 created
I0717 17:25:37.531012 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:37.530957 31840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:25:37.531029 31817 main.go:141] libmachine: (ha-333994) Setting up store path in /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994 ...
I0717 17:25:37.531050 31817 main.go:141] libmachine: (ha-333994) Building disk image from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
I0717 17:25:37.531153 31817 main.go:141] libmachine: (ha-333994) Downloading /home/jenkins/minikube-integration/19283-14409/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
I0717 17:25:37.769775 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:37.769643 31840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa...
I0717 17:25:38.127523 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:38.127394 31840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/ha-333994.rawdisk...
I0717 17:25:38.127548 31817 main.go:141] libmachine: (ha-333994) DBG | Writing magic tar header
I0717 17:25:38.127558 31817 main.go:141] libmachine: (ha-333994) DBG | Writing SSH key tar header
I0717 17:25:38.127566 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:38.127499 31840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994 ...
I0717 17:25:38.127579 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994
I0717 17:25:38.127621 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994 (perms=drwx------)
I0717 17:25:38.127638 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines (perms=drwxr-xr-x)
I0717 17:25:38.127649 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube (perms=drwxr-xr-x)
I0717 17:25:38.127659 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409 (perms=drwxrwxr-x)
I0717 17:25:38.127674 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0717 17:25:38.127685 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0717 17:25:38.127697 31817 main.go:141] libmachine: (ha-333994) Creating domain...
I0717 17:25:38.127708 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines
I0717 17:25:38.127720 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:25:38.127729 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409
I0717 17:25:38.127736 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0717 17:25:38.127763 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins
I0717 17:25:38.127774 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home
I0717 17:25:38.127787 31817 main.go:141] libmachine: (ha-333994) DBG | Skipping /home - not owner
I0717 17:25:38.128688 31817 main.go:141] libmachine: (ha-333994) define libvirt domain using xml:
I0717 17:25:38.128706 31817 main.go:141] libmachine: (ha-333994) <domain type='kvm'>
I0717 17:25:38.128716 31817 main.go:141] libmachine: (ha-333994) <name>ha-333994</name>
I0717 17:25:38.128724 31817 main.go:141] libmachine: (ha-333994) <memory unit='MiB'>2200</memory>
I0717 17:25:38.128733 31817 main.go:141] libmachine: (ha-333994) <vcpu>2</vcpu>
I0717 17:25:38.128743 31817 main.go:141] libmachine: (ha-333994) <features>
I0717 17:25:38.128752 31817 main.go:141] libmachine: (ha-333994) <acpi/>
I0717 17:25:38.128758 31817 main.go:141] libmachine: (ha-333994) <apic/>
I0717 17:25:38.128768 31817 main.go:141] libmachine: (ha-333994) <pae/>
I0717 17:25:38.128788 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.128800 31817 main.go:141] libmachine: (ha-333994) </features>
I0717 17:25:38.128818 31817 main.go:141] libmachine: (ha-333994) <cpu mode='host-passthrough'>
I0717 17:25:38.128833 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.128844 31817 main.go:141] libmachine: (ha-333994) </cpu>
I0717 17:25:38.128854 31817 main.go:141] libmachine: (ha-333994) <os>
I0717 17:25:38.128867 31817 main.go:141] libmachine: (ha-333994) <type>hvm</type>
I0717 17:25:38.128878 31817 main.go:141] libmachine: (ha-333994) <boot dev='cdrom'/>
I0717 17:25:38.128890 31817 main.go:141] libmachine: (ha-333994) <boot dev='hd'/>
I0717 17:25:38.128901 31817 main.go:141] libmachine: (ha-333994) <bootmenu enable='no'/>
I0717 17:25:38.128927 31817 main.go:141] libmachine: (ha-333994) </os>
I0717 17:25:38.128949 31817 main.go:141] libmachine: (ha-333994) <devices>
I0717 17:25:38.128960 31817 main.go:141] libmachine: (ha-333994) <disk type='file' device='cdrom'>
I0717 17:25:38.128974 31817 main.go:141] libmachine: (ha-333994) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/boot2docker.iso'/>
I0717 17:25:38.128988 31817 main.go:141] libmachine: (ha-333994) <target dev='hdc' bus='scsi'/>
I0717 17:25:38.128998 31817 main.go:141] libmachine: (ha-333994) <readonly/>
I0717 17:25:38.129007 31817 main.go:141] libmachine: (ha-333994) </disk>
I0717 17:25:38.129031 31817 main.go:141] libmachine: (ha-333994) <disk type='file' device='disk'>
I0717 17:25:38.129043 31817 main.go:141] libmachine: (ha-333994) <driver name='qemu' type='raw' cache='default' io='threads' />
I0717 17:25:38.129057 31817 main.go:141] libmachine: (ha-333994) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/ha-333994.rawdisk'/>
I0717 17:25:38.129067 31817 main.go:141] libmachine: (ha-333994) <target dev='hda' bus='virtio'/>
I0717 17:25:38.129079 31817 main.go:141] libmachine: (ha-333994) </disk>
I0717 17:25:38.129089 31817 main.go:141] libmachine: (ha-333994) <interface type='network'>
I0717 17:25:38.129098 31817 main.go:141] libmachine: (ha-333994) <source network='mk-ha-333994'/>
I0717 17:25:38.129109 31817 main.go:141] libmachine: (ha-333994) <model type='virtio'/>
I0717 17:25:38.129125 31817 main.go:141] libmachine: (ha-333994) </interface>
I0717 17:25:38.129143 31817 main.go:141] libmachine: (ha-333994) <interface type='network'>
I0717 17:25:38.129156 31817 main.go:141] libmachine: (ha-333994) <source network='default'/>
I0717 17:25:38.129166 31817 main.go:141] libmachine: (ha-333994) <model type='virtio'/>
I0717 17:25:38.129177 31817 main.go:141] libmachine: (ha-333994) </interface>
I0717 17:25:38.129185 31817 main.go:141] libmachine: (ha-333994) <serial type='pty'>
I0717 17:25:38.129197 31817 main.go:141] libmachine: (ha-333994) <target port='0'/>
I0717 17:25:38.129212 31817 main.go:141] libmachine: (ha-333994) </serial>
I0717 17:25:38.129237 31817 main.go:141] libmachine: (ha-333994) <console type='pty'>
I0717 17:25:38.129257 31817 main.go:141] libmachine: (ha-333994) <target type='serial' port='0'/>
I0717 17:25:38.129277 31817 main.go:141] libmachine: (ha-333994) </console>
I0717 17:25:38.129288 31817 main.go:141] libmachine: (ha-333994) <rng model='virtio'>
I0717 17:25:38.129301 31817 main.go:141] libmachine: (ha-333994) <backend model='random'>/dev/random</backend>
I0717 17:25:38.129310 31817 main.go:141] libmachine: (ha-333994) </rng>
I0717 17:25:38.129321 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.129333 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.129343 31817 main.go:141] libmachine: (ha-333994) </devices>
I0717 17:25:38.129353 31817 main.go:141] libmachine: (ha-333994) </domain>
I0717 17:25:38.129364 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.133746 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:7d:ea:ab in network default
I0717 17:25:38.134333 31817 main.go:141] libmachine: (ha-333994) Ensuring networks are active...
I0717 17:25:38.134354 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:38.134949 31817 main.go:141] libmachine: (ha-333994) Ensuring network default is active
I0717 17:25:38.135204 31817 main.go:141] libmachine: (ha-333994) Ensuring network mk-ha-333994 is active
I0717 17:25:38.135633 31817 main.go:141] libmachine: (ha-333994) Getting domain xml...
I0717 17:25:38.136245 31817 main.go:141] libmachine: (ha-333994) Creating domain...
I0717 17:25:39.310815 31817 main.go:141] libmachine: (ha-333994) Waiting to get IP...
I0717 17:25:39.311620 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:39.312037 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:39.312090 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:39.312036 31840 retry.go:31] will retry after 308.80623ms: waiting for machine to come up
I0717 17:25:39.622682 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:39.623065 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:39.623083 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:39.623047 31840 retry.go:31] will retry after 344.848861ms: waiting for machine to come up
I0717 17:25:39.969533 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:39.969924 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:39.969950 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:39.969868 31840 retry.go:31] will retry after 339.149265ms: waiting for machine to come up
I0717 17:25:40.310470 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:40.310889 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:40.310915 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:40.310855 31840 retry.go:31] will retry after 442.455692ms: waiting for machine to come up
I0717 17:25:40.754326 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:40.754769 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:40.754793 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:40.754727 31840 retry.go:31] will retry after 692.369602ms: waiting for machine to come up
I0717 17:25:41.448430 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:41.448821 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:41.448845 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:41.448784 31840 retry.go:31] will retry after 888.634073ms: waiting for machine to come up
I0717 17:25:42.338562 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:42.338956 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:42.338987 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:42.338917 31840 retry.go:31] will retry after 958.652231ms: waiting for machine to come up
I0717 17:25:43.299646 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:43.300036 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:43.300060 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:43.299996 31840 retry.go:31] will retry after 1.026520774s: waiting for machine to come up
I0717 17:25:44.328045 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:44.328353 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:44.328378 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:44.328319 31840 retry.go:31] will retry after 1.144606861s: waiting for machine to come up
I0717 17:25:45.474485 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:45.474883 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:45.474908 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:45.474852 31840 retry.go:31] will retry after 2.320040547s: waiting for machine to come up
I0717 17:25:47.796771 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:47.797227 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:47.797257 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:47.797189 31840 retry.go:31] will retry after 2.900412309s: waiting for machine to come up
I0717 17:25:50.701258 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:50.701734 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:50.701785 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:50.701700 31840 retry.go:31] will retry after 2.901702791s: waiting for machine to come up
I0717 17:25:53.605129 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:53.605559 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:53.605577 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:53.605522 31840 retry.go:31] will retry after 3.63399522s: waiting for machine to come up
I0717 17:25:57.240563 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.240970 31817 main.go:141] libmachine: (ha-333994) Found IP for machine: 192.168.39.180
I0717 17:25:57.241006 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has current primary IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.241016 31817 main.go:141] libmachine: (ha-333994) Reserving static IP address...
I0717 17:25:57.241422 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find host DHCP lease matching {name: "ha-333994", mac: "52:54:00:73:4b:68", ip: "192.168.39.180"} in network mk-ha-333994
I0717 17:25:57.311172 31817 main.go:141] libmachine: (ha-333994) DBG | Getting to WaitForSSH function...
I0717 17:25:57.311209 31817 main.go:141] libmachine: (ha-333994) Reserved static IP address: 192.168.39.180
I0717 17:25:57.311222 31817 main.go:141] libmachine: (ha-333994) Waiting for SSH to be available...
I0717 17:25:57.313438 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.313869 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.313914 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.313935 31817 main.go:141] libmachine: (ha-333994) DBG | Using SSH client type: external
I0717 17:25:57.313972 31817 main.go:141] libmachine: (ha-333994) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa (-rw-------)
I0717 17:25:57.314013 31817 main.go:141] libmachine: (ha-333994) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa -p 22] /usr/bin/ssh <nil>}
I0717 17:25:57.314051 31817 main.go:141] libmachine: (ha-333994) DBG | About to run SSH command:
I0717 17:25:57.314064 31817 main.go:141] libmachine: (ha-333994) DBG | exit 0
I0717 17:25:57.442005 31817 main.go:141] libmachine: (ha-333994) DBG | SSH cmd err, output: <nil>:
I0717 17:25:57.442249 31817 main.go:141] libmachine: (ha-333994) KVM machine creation complete!
I0717 17:25:57.442580 31817 main.go:141] libmachine: (ha-333994) Calling .GetConfigRaw
I0717 17:25:57.443082 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:57.443285 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:57.443431 31817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0717 17:25:57.443445 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:25:57.444683 31817 main.go:141] libmachine: Detecting operating system of created instance...
I0717 17:25:57.444702 31817 main.go:141] libmachine: Waiting for SSH to be available...
I0717 17:25:57.444710 31817 main.go:141] libmachine: Getting to WaitForSSH function...
I0717 17:25:57.444718 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.446779 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.447118 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.447145 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.447285 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.447420 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.447569 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.447686 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.447850 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.448075 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.448086 31817 main.go:141] libmachine: About to run SSH command:
exit 0
I0717 17:25:57.561413 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 17:25:57.561435 31817 main.go:141] libmachine: Detecting the provisioner...
I0717 17:25:57.561444 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.564006 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.564331 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.564353 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.564530 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.564739 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.564886 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.565046 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.565213 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.565388 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.565402 31817 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0717 17:25:57.678978 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0717 17:25:57.679062 31817 main.go:141] libmachine: found compatible host: buildroot
I0717 17:25:57.679075 31817 main.go:141] libmachine: Provisioning with buildroot...
I0717 17:25:57.679085 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:57.679397 31817 buildroot.go:166] provisioning hostname "ha-333994"
I0717 17:25:57.679418 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:57.679587 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.682101 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.682468 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.682497 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.682625 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.682902 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.683088 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.683236 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.683384 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.683567 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.683582 31817 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-333994 && echo "ha-333994" | sudo tee /etc/hostname
I0717 17:25:57.808613 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-333994
I0717 17:25:57.808643 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.811150 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.811462 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.811484 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.811633 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.811819 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.811975 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.812114 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.812259 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.812470 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.812492 31817 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-333994' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-333994/g' /etc/hosts;
else
echo '127.0.1.1 ha-333994' | sudo tee -a /etc/hosts;
fi
fi
I0717 17:25:57.935982 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 17:25:57.936010 31817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14409/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14409/.minikube}
I0717 17:25:57.936045 31817 buildroot.go:174] setting up certificates
I0717 17:25:57.936053 31817 provision.go:84] configureAuth start
I0717 17:25:57.936064 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:57.936323 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:25:57.938795 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.939097 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.939122 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.939256 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.941132 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.941439 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.941465 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.941555 31817 provision.go:143] copyHostCerts
I0717 17:25:57.941591 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:25:57.941628 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem, removing ...
I0717 17:25:57.941644 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:25:57.941723 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem (1082 bytes)
I0717 17:25:57.941842 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:25:57.941865 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem, removing ...
I0717 17:25:57.941872 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:25:57.941911 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem (1123 bytes)
I0717 17:25:57.941974 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:25:57.942004 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem, removing ...
I0717 17:25:57.942014 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:25:57.942052 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem (1679 bytes)
I0717 17:25:57.942132 31817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem org=jenkins.ha-333994 san=[127.0.0.1 192.168.39.180 ha-333994 localhost minikube]
I0717 17:25:58.111694 31817 provision.go:177] copyRemoteCerts
I0717 17:25:58.111759 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 17:25:58.111785 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.114260 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.114541 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.114565 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.114746 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.114900 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.115022 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.115159 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.204834 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 17:25:58.204915 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0717 17:25:58.233451 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 17:25:58.233504 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0717 17:25:58.260715 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 17:25:58.260793 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 17:25:58.288074 31817 provision.go:87] duration metric: took 352.00837ms to configureAuth
I0717 17:25:58.288100 31817 buildroot.go:189] setting minikube options for container-runtime
I0717 17:25:58.288281 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:25:58.288301 31817 main.go:141] libmachine: Checking connection to Docker...
I0717 17:25:58.288311 31817 main.go:141] libmachine: (ha-333994) Calling .GetURL
I0717 17:25:58.289444 31817 main.go:141] libmachine: (ha-333994) DBG | Using libvirt version 6000000
I0717 17:25:58.291569 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.291932 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.291957 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.292117 31817 main.go:141] libmachine: Docker is up and running!
I0717 17:25:58.292130 31817 main.go:141] libmachine: Reticulating splines...
I0717 17:25:58.292136 31817 client.go:171] duration metric: took 20.833465773s to LocalClient.Create
I0717 17:25:58.292154 31817 start.go:167] duration metric: took 20.833518022s to libmachine.API.Create "ha-333994"
I0717 17:25:58.292162 31817 start.go:293] postStartSetup for "ha-333994" (driver="kvm2")
I0717 17:25:58.292170 31817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 17:25:58.292186 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.292380 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 17:25:58.292412 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.294705 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.294988 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.295011 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.295156 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.295308 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.295448 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.295547 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.380876 31817 ssh_runner.go:195] Run: cat /etc/os-release
I0717 17:25:58.385479 31817 info.go:137] Remote host: Buildroot 2023.02.9
I0717 17:25:58.385504 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/addons for local assets ...
I0717 17:25:58.385563 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/files for local assets ...
I0717 17:25:58.385657 31817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> 216612.pem in /etc/ssl/certs
I0717 17:25:58.385670 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /etc/ssl/certs/216612.pem
I0717 17:25:58.385792 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 17:25:58.395135 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:25:58.422415 31817 start.go:296] duration metric: took 130.238563ms for postStartSetup
I0717 17:25:58.422468 31817 main.go:141] libmachine: (ha-333994) Calling .GetConfigRaw
I0717 17:25:58.423096 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:25:58.425440 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.425742 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.425767 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.426007 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:25:58.426198 31817 start.go:128] duration metric: took 20.984856664s to createHost
I0717 17:25:58.426221 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.428248 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.428511 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.428538 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.428637 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.428826 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.428930 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.429005 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.429097 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:58.429257 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:58.429266 31817 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0717 17:25:58.543836 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237158.504657493
I0717 17:25:58.543858 31817 fix.go:216] guest clock: 1721237158.504657493
I0717 17:25:58.543867 31817 fix.go:229] Guest: 2024-07-17 17:25:58.504657493 +0000 UTC Remote: 2024-07-17 17:25:58.426211523 +0000 UTC m=+21.086147695 (delta=78.44597ms)
I0717 17:25:58.543886 31817 fix.go:200] guest clock delta is within tolerance: 78.44597ms
I0717 17:25:58.543891 31817 start.go:83] releasing machines lock for "ha-333994", held for 21.102620399s
I0717 17:25:58.543907 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.544173 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:25:58.546693 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.547047 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.547072 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.547197 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.547654 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.547823 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.547916 31817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 17:25:58.547962 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.548054 31817 ssh_runner.go:195] Run: cat /version.json
I0717 17:25:58.548080 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.550378 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.550648 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.550679 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.550978 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.550982 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.551129 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.551187 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.551227 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.551240 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.551305 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.551318 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.551480 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.551686 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.552927 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.654133 31817 ssh_runner.go:195] Run: systemctl --version
I0717 17:25:58.660072 31817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0717 17:25:58.665532 31817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 17:25:58.665586 31817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 17:25:58.682884 31817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 17:25:58.682906 31817 start.go:495] detecting cgroup driver to use...
I0717 17:25:58.682966 31817 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 17:25:58.710921 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 17:25:58.724815 31817 docker.go:217] disabling cri-docker service (if available) ...
I0717 17:25:58.724862 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 17:25:58.738870 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 17:25:58.752912 31817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 17:25:58.873905 31817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 17:25:59.009226 31817 docker.go:233] disabling docker service ...
I0717 17:25:59.009286 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 17:25:59.024317 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 17:25:59.037729 31817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 17:25:59.178928 31817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 17:25:59.308950 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 17:25:59.322702 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 17:25:59.341915 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 17:25:59.352890 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 17:25:59.363450 31817 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 17:25:59.363513 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 17:25:59.374006 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:25:59.384984 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 17:25:59.395933 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:25:59.406370 31817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 17:25:59.416834 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 17:25:59.427824 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 17:25:59.438419 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
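[editor's note] The sed pipeline above rewrites /etc/containerd/config.toml in place: pause image bumped to 3.9, SystemdCgroup forced to false (cgroupfs driver), runc moved to the v2 runtime, and enable_unprivileged_ports injected under the CRI plugin table. A scratch reproduction of the key rewrites, run against a temp file rather than the VM's real config (the starting file contents here are an assumed minimal config, not what the VM actually had):

```shell
# Scratch copy standing in for /etc/containerd/config.toml (assumed minimal
# contents for illustration; the real file is much larger).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Same expressions the log runs via sudo on the VM:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' "$cfg"
grep -q 'pause:3.9' "$cfg" && grep -q 'SystemdCgroup = false' "$cfg" \
  && grep -q 'enable_unprivileged_ports = true' "$cfg" && result=ok
rm -f "$cfg"
```

Note the third expression's `\[plugins."io.containerd.grpc.v1.cri"\]` requires a literal `]` right after `"cri"`, so it matches only the plugin table header, not nested sub-tables.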
I0717 17:25:59.448933 31817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 17:25:59.458271 31817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0717 17:25:59.458321 31817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0717 17:25:59.471288 31817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 17:25:59.480733 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:25:59.597561 31817 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 17:25:59.625448 31817 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 17:25:59.625540 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:25:59.630090 31817 retry.go:31] will retry after 1.114753424s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
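[editor's note] The retry.go line above is a bounded poll: stat the containerd socket, back off (~1.1s here), and try again until the 60s cap from start.go expires. A minimal sketch of the same loop against a scratch path (the background `touch` stands in for containerd creating its socket):

```shell
target=$(mktemp -u)                  # path that does not exist yet
( sleep 0.2; touch "$target" ) &     # stand-in for containerd creating the socket
tries=0
until stat "$target" >/dev/null 2>&1; do
  tries=$((tries + 1))
  [ "$tries" -gt 100 ] && break      # bounded wait, like the 60s cap in start.go
  sleep 0.1
done
wait
stat "$target" >/dev/null 2>&1 && status=found
rm -f "$target"
```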
I0717 17:26:00.745398 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:26:00.750563 31817 start.go:563] Will wait 60s for crictl version
I0717 17:26:00.750619 31817 ssh_runner.go:195] Run: which crictl
I0717 17:26:00.754270 31817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 17:26:00.794015 31817 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.19
RuntimeApiVersion: v1
I0717 17:26:00.794075 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:00.821370 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:00.850476 31817 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.7.19 ...
I0717 17:26:00.851699 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:26:00.854267 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:00.854598 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:00.854625 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:00.854810 31817 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0717 17:26:00.858914 31817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
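[editor's note] The /etc/hosts update above uses a remove-then-append pattern: filter out any existing host.minikube.internal line, append the current mapping, and copy the result back, so the entry is replaced rather than duplicated. The same shape against a scratch file (the stale 192.168.39.9 entry is invented to show replacement; no sudo needed here):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.9\thost.minikube.internal\n' > "$hosts"
# Drop any existing entry, then append the current one:
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
count=$(grep -c 'host.minikube.internal' "$hosts")   # exactly one entry remains
rm -f "$hosts"
```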
I0717 17:26:00.872028 31817 kubeadm.go:883] updating cluster {Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 17:26:00.872129 31817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 17:26:00.872173 31817 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:26:00.904349 31817 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
I0717 17:26:00.904418 31817 ssh_runner.go:195] Run: which lz4
I0717 17:26:00.908264 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0717 17:26:00.908363 31817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0717 17:26:00.912476 31817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0717 17:26:00.912508 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (394473408 bytes)
I0717 17:26:02.292043 31817 containerd.go:563] duration metric: took 1.383715694s to copy over tarball
I0717 17:26:02.292124 31817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0717 17:26:04.380435 31817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.088281526s)
I0717 17:26:04.380473 31817 containerd.go:570] duration metric: took 2.088397847s to extract the tarball
I0717 17:26:04.380483 31817 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0717 17:26:04.417289 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:26:04.532503 31817 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 17:26:04.562019 31817 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:26:04.594139 31817 retry.go:31] will retry after 159.715137ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2024-07-17T17:26:04Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0717 17:26:04.754516 31817 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:26:04.790521 31817 containerd.go:627] all images are preloaded for containerd runtime.
I0717 17:26:04.790541 31817 cache_images.go:84] Images are preloaded, skipping loading
I0717 17:26:04.790548 31817 kubeadm.go:934] updating node { 192.168.39.180 8443 v1.30.2 containerd true true} ...
I0717 17:26:04.790647 31817 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-333994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 17:26:04.790702 31817 ssh_runner.go:195] Run: sudo crictl info
I0717 17:26:04.826334 31817 cni.go:84] Creating CNI manager for ""
I0717 17:26:04.826357 31817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0717 17:26:04.826364 31817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 17:26:04.826385 31817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-333994 NodeName:ha-333994 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0717 17:26:04.826538 31817 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.180
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "ha-333994"
kubeletExtraArgs:
node-ip: 192.168.39.180
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0717 17:26:04.826560 31817 kube-vip.go:115] generating kube-vip config ...
I0717 17:26:04.826608 31817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0717 17:26:04.845088 31817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0717 17:26:04.845186 31817 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/super-admin.conf"
name: kubeconfig
status: {}
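[editor's note] The lease settings in the kube-vip manifest above (vip_leaseduration 5, vip_renewdeadline 3, vip_retryperiod 1) follow the usual client-go leader-election ordering constraint, retry period < renew deadline < lease duration. That kube-vip enforces this via client-go is stated here as an assumption, not something the log verifies:

```shell
lease=5 renew=3 retry=1    # values from the manifest above, in seconds
[ "$retry" -lt "$renew" ] && [ "$renew" -lt "$lease" ] && ordering=valid
```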
I0717 17:26:04.845237 31817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 17:26:04.855420 31817 binaries.go:44] Found k8s binaries, skipping transfer
I0717 17:26:04.855490 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0717 17:26:04.865095 31817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
I0717 17:26:04.882653 31817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 17:26:04.899447 31817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
I0717 17:26:04.917467 31817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
I0717 17:26:04.934831 31817 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0717 17:26:04.938924 31817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 17:26:04.951512 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:26:05.064475 31817 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 17:26:05.091657 31817 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994 for IP: 192.168.39.180
I0717 17:26:05.091681 31817 certs.go:194] generating shared ca certs ...
I0717 17:26:05.091701 31817 certs.go:226] acquiring lock for ca certs: {Name:mkbd59c659d87951ff3ee355cd9afc07084cc973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.091873 31817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key
I0717 17:26:05.091927 31817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key
I0717 17:26:05.091942 31817 certs.go:256] generating profile certs ...
I0717 17:26:05.092017 31817 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key
I0717 17:26:05.092036 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt with IP's: []
I0717 17:26:05.333847 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt ...
I0717 17:26:05.333874 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt: {Name:mk777cbb40105a68e3f77323fe294b684956fe92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.334027 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key ...
I0717 17:26:05.334037 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key: {Name:mk5d028eb3d5165101367caeb298d78e1ef97418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.334107 31817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e
I0717 17:26:05.334145 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.180 192.168.39.254]
I0717 17:26:05.424786 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e ...
I0717 17:26:05.424814 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e: {Name:mk0136c8aa6e3dcb0178d33e23c8a472c3572950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.424956 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e ...
I0717 17:26:05.424968 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e: {Name:mk21a2bd5914e6b9398865902ece829e628c40ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.425035 31817 certs.go:381] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt
I0717 17:26:05.425116 31817 certs.go:385] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key
I0717 17:26:05.425167 31817 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key
I0717 17:26:05.425180 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt with IP's: []
I0717 17:26:05.709359 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt ...
I0717 17:26:05.709387 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt: {Name:mk00da479f15831c3fb1174ab8fe01112b152616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.709526 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key ...
I0717 17:26:05.709536 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key: {Name:mk48280e7c358eaec39922f30f6427d18e40d4e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.709599 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0717 17:26:05.709615 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0717 17:26:05.709625 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0717 17:26:05.709637 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0717 17:26:05.709649 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0717 17:26:05.709664 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0717 17:26:05.709675 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0717 17:26:05.709686 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0717 17:26:05.709732 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem (1338 bytes)
W0717 17:26:05.709772 31817 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661_empty.pem, impossibly tiny 0 bytes
I0717 17:26:05.709781 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem (1679 bytes)
I0717 17:26:05.709804 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem (1082 bytes)
I0717 17:26:05.709828 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem (1123 bytes)
I0717 17:26:05.709854 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem (1679 bytes)
I0717 17:26:05.709889 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:26:05.709937 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /usr/share/ca-certificates/216612.pem
I0717 17:26:05.709953 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:05.709962 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem -> /usr/share/ca-certificates/21661.pem
I0717 17:26:05.710499 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 17:26:05.736286 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0717 17:26:05.762624 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 17:26:05.789813 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 17:26:05.816731 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0717 17:26:05.843922 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0717 17:26:05.890090 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 17:26:05.917641 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 17:26:05.942689 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /usr/share/ca-certificates/216612.pem (1708 bytes)
I0717 17:26:05.968245 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 17:26:05.991503 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem --> /usr/share/ca-certificates/21661.pem (1338 bytes)
I0717 17:26:06.014644 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 17:26:06.030964 31817 ssh_runner.go:195] Run: openssl version
I0717 17:26:06.036668 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216612.pem && ln -fs /usr/share/ca-certificates/216612.pem /etc/ssl/certs/216612.pem"
I0717 17:26:06.047444 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216612.pem
I0717 17:26:06.051872 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:21 /usr/share/ca-certificates/216612.pem
I0717 17:26:06.051933 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216612.pem
I0717 17:26:06.057696 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/216612.pem /etc/ssl/certs/3ec20f2e.0"
I0717 17:26:06.068885 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 17:26:06.079816 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:06.084516 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:13 /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:06.084582 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:06.090194 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 17:26:06.100911 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21661.pem && ln -fs /usr/share/ca-certificates/21661.pem /etc/ssl/certs/21661.pem"
I0717 17:26:06.112203 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21661.pem
I0717 17:26:06.116753 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:21 /usr/share/ca-certificates/21661.pem
I0717 17:26:06.116812 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21661.pem
I0717 17:26:06.122686 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21661.pem /etc/ssl/certs/51391683.0"
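[editor's note] Each `openssl x509 -hash` / `ln -fs /etc/ssl/certs/<hash>.0` pair above builds OpenSSL's hashed-directory CA lookup: a CA certificate is symlinked under its 8-hex subject-name hash so verification can locate it by filename. A scratch reproduction with a throwaway self-signed CA (the CN is invented):

```shell
dir=$(mktemp -d)
# Throwaway CA standing in for minikubeCA.pem:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=scratchCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # subject-name hash
ln -fs "$dir/ca.pem" "$dir/$hash.0"                   # e.g. b5213941.0 in the log
linked=$(readlink "$dir/$hash.0")
rm -rf "$dir"
```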
I0717 17:26:06.133462 31817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 17:26:06.137718 31817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0717 17:26:06.137774 31817 kubeadm.go:392] StartCluster: {Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 17:26:06.137852 31817 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0717 17:26:06.137906 31817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0717 17:26:06.181182 31817 cri.go:89] found id: ""
I0717 17:26:06.181252 31817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 17:26:06.191588 31817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0717 17:26:06.201776 31817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0717 17:26:06.211610 31817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0717 17:26:06.211628 31817 kubeadm.go:157] found existing configuration files:
I0717 17:26:06.211668 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0717 17:26:06.221376 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0717 17:26:06.221428 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0717 17:26:06.231162 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0717 17:26:06.240465 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0717 17:26:06.240520 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0717 17:26:06.250464 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0717 17:26:06.260016 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0717 17:26:06.260071 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0717 17:26:06.269931 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0717 17:26:06.279357 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0717 17:26:06.279423 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0717 17:26:06.289124 31817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0717 17:26:06.540765 31817 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0717 17:26:16.854837 31817 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
I0717 17:26:16.854895 31817 kubeadm.go:310] [preflight] Running pre-flight checks
I0717 17:26:16.854996 31817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0717 17:26:16.855136 31817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0717 17:26:16.855227 31817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0717 17:26:16.855281 31817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0717 17:26:16.856908 31817 out.go:204] - Generating certificates and keys ...
I0717 17:26:16.856974 31817 kubeadm.go:310] [certs] Using existing ca certificate authority
I0717 17:26:16.857030 31817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0717 17:26:16.857098 31817 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0717 17:26:16.857147 31817 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0717 17:26:16.857206 31817 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0717 17:26:16.857246 31817 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0717 17:26:16.857299 31817 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0717 17:26:16.857447 31817 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-333994 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
I0717 17:26:16.857539 31817 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0717 17:26:16.857713 31817 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-333994 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
I0717 17:26:16.857815 31817 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0717 17:26:16.857909 31817 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0717 17:26:16.857973 31817 kubeadm.go:310] [certs] Generating "sa" key and public key
I0717 17:26:16.858063 31817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0717 17:26:16.858158 31817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0717 17:26:16.858237 31817 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0717 17:26:16.858285 31817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0717 17:26:16.858338 31817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0717 17:26:16.858384 31817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0717 17:26:16.858464 31817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0717 17:26:16.858535 31817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0717 17:26:16.860941 31817 out.go:204] - Booting up control plane ...
I0717 17:26:16.861023 31817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0717 17:26:16.861114 31817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0717 17:26:16.861201 31817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0717 17:26:16.861312 31817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0717 17:26:16.861419 31817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0717 17:26:16.861463 31817 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0717 17:26:16.861573 31817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0717 17:26:16.861661 31817 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0717 17:26:16.861750 31817 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.96481ms
I0717 17:26:16.861834 31817 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0717 17:26:16.861884 31817 kubeadm.go:310] [api-check] The API server is healthy after 5.974489427s
I0717 17:26:16.862127 31817 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0717 17:26:16.862266 31817 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0717 17:26:16.862320 31817 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0717 17:26:16.862517 31817 kubeadm.go:310] [mark-control-plane] Marking the node ha-333994 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0717 17:26:16.862583 31817 kubeadm.go:310] [bootstrap-token] Using token: nha8at.aampri4d84mofmvm
I0717 17:26:16.863863 31817 out.go:204] - Configuring RBAC rules ...
I0717 17:26:16.863958 31817 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0717 17:26:16.864053 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0717 17:26:16.864187 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0717 17:26:16.864354 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0717 17:26:16.864468 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0717 17:26:16.864606 31817 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0717 17:26:16.864779 31817 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0717 17:26:16.864819 31817 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0717 17:26:16.864861 31817 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0717 17:26:16.864867 31817 kubeadm.go:310]
I0717 17:26:16.864915 31817 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0717 17:26:16.864921 31817 kubeadm.go:310]
I0717 17:26:16.864989 31817 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0717 17:26:16.865003 31817 kubeadm.go:310]
I0717 17:26:16.865036 31817 kubeadm.go:310] mkdir -p $HOME/.kube
I0717 17:26:16.865087 31817 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0717 17:26:16.865148 31817 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0717 17:26:16.865158 31817 kubeadm.go:310]
I0717 17:26:16.865241 31817 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0717 17:26:16.865256 31817 kubeadm.go:310]
I0717 17:26:16.865326 31817 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0717 17:26:16.865337 31817 kubeadm.go:310]
I0717 17:26:16.865412 31817 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0717 17:26:16.865511 31817 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0717 17:26:16.865586 31817 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0717 17:26:16.865592 31817 kubeadm.go:310]
I0717 17:26:16.865681 31817 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0717 17:26:16.865783 31817 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0717 17:26:16.865794 31817 kubeadm.go:310]
I0717 17:26:16.865910 31817 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nha8at.aampri4d84mofmvm \
I0717 17:26:16.866069 31817 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a60e42bdf4c234276b18cf44d8d4bb8b184659f5dc63b21861fc880bef0ea484 \
I0717 17:26:16.866105 31817 kubeadm.go:310] --control-plane
I0717 17:26:16.866127 31817 kubeadm.go:310]
I0717 17:26:16.866222 31817 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0717 17:26:16.866229 31817 kubeadm.go:310]
I0717 17:26:16.866315 31817 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nha8at.aampri4d84mofmvm \
I0717 17:26:16.866474 31817 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a60e42bdf4c234276b18cf44d8d4bb8b184659f5dc63b21861fc880bef0ea484
I0717 17:26:16.866487 31817 cni.go:84] Creating CNI manager for ""
I0717 17:26:16.866496 31817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0717 17:26:16.867885 31817 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0717 17:26:16.868963 31817 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0717 17:26:16.874562 31817 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
I0717 17:26:16.874582 31817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0717 17:26:16.893967 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0717 17:26:17.240919 31817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0717 17:26:17.241000 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:17.241050 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-333994 minikube.k8s.io/updated_at=2024_07_17T17_26_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-333994 minikube.k8s.io/primary=true
I0717 17:26:17.265880 31817 ops.go:34] apiserver oom_adj: -16
I0717 17:26:17.373587 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:17.874354 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:18.374127 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:18.874198 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:19.374489 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:19.874572 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:20.373924 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:20.874355 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:21.373893 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:21.874071 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:22.374000 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:22.873730 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:23.374382 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:23.874233 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:24.374181 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:24.874599 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:25.374533 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:25.874592 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:26.373806 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:26.874333 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:27.373913 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:27.874327 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:28.373877 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:28.873887 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:29.374632 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:29.874052 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:30.024970 31817 kubeadm.go:1113] duration metric: took 12.784009766s to wait for elevateKubeSystemPrivileges
I0717 17:26:30.025013 31817 kubeadm.go:394] duration metric: took 23.887240562s to StartCluster
I0717 17:26:30.025031 31817 settings.go:142] acquiring lock: {Name:mk91c7387a23a84a0d90c1f4a8be889afd5f8e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:30.025112 31817 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19283-14409/kubeconfig
I0717 17:26:30.026088 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/kubeconfig: {Name:mkcf3eba146eb28d296552e24aa3055bdbdcc231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:30.026357 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0717 17:26:30.026385 31817 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 17:26:30.026411 31817 start.go:241] waiting for startup goroutines ...
I0717 17:26:30.026428 31817 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0717 17:26:30.026497 31817 addons.go:69] Setting storage-provisioner=true in profile "ha-333994"
I0717 17:26:30.026512 31817 addons.go:69] Setting default-storageclass=true in profile "ha-333994"
I0717 17:26:30.026541 31817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-333994"
I0717 17:26:30.026571 31817 addons.go:234] Setting addon storage-provisioner=true in "ha-333994"
I0717 17:26:30.026609 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:30.026621 31817 host.go:66] Checking if "ha-333994" exists ...
I0717 17:26:30.026938 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.026980 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.026991 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.027043 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.041651 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
I0717 17:26:30.042154 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
I0717 17:26:30.042786 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.043559 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.043586 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.043583 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.044032 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.044132 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.044154 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.044459 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.044627 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:30.045452 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.045489 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.046872 31817 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/19283-14409/kubeconfig
I0717 17:26:30.047164 31817 kapi.go:59] client config for ha-333994: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0717 17:26:30.047615 31817 cert_rotation.go:137] Starting client certificate rotation controller
I0717 17:26:30.047786 31817 addons.go:234] Setting addon default-storageclass=true in "ha-333994"
I0717 17:26:30.047815 31817 host.go:66] Checking if "ha-333994" exists ...
I0717 17:26:30.048048 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.048070 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.062004 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
I0717 17:26:30.062451 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.062948 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.062973 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.063274 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.063821 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.063852 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.064986 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
I0717 17:26:30.065414 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.066072 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.066093 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.066486 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.066685 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:30.068400 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:26:30.070565 31817 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0717 17:26:30.072061 31817 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0717 17:26:30.072111 31817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0717 17:26:30.072172 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:26:30.075414 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.075887 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:30.075945 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.076100 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:26:30.076283 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:26:30.076404 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:26:30.076550 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:26:30.080633 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
I0717 17:26:30.081042 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.081529 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.081553 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.081832 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.082004 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:30.083501 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:26:30.083712 31817 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0717 17:26:30.083728 31817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0717 17:26:30.083744 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:26:30.086186 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.086587 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:30.086610 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.086776 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:26:30.086954 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:26:30.087117 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:26:30.087256 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:26:30.228292 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0717 17:26:30.301671 31817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 17:26:30.365207 31817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0717 17:26:30.867357 31817 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0717 17:26:30.994695 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.994720 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.994814 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.994839 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.995019 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995032 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995042 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.995049 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.995083 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995094 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995102 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.995109 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.995113 31817 main.go:141] libmachine: (ha-333994) DBG | Closing plugin on server side
I0717 17:26:30.995338 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995354 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995425 31817 main.go:141] libmachine: (ha-333994) DBG | Closing plugin on server side
I0717 17:26:30.995442 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995454 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995583 31817 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
I0717 17:26:30.995597 31817 round_trippers.go:469] Request Headers:
I0717 17:26:30.995607 31817 round_trippers.go:473] Accept: application/json, */*
I0717 17:26:30.995615 31817 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0717 17:26:31.008616 31817 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
I0717 17:26:31.009189 31817 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
I0717 17:26:31.009203 31817 round_trippers.go:469] Request Headers:
I0717 17:26:31.009211 31817 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0717 17:26:31.009218 31817 round_trippers.go:473] Accept: application/json, */*
I0717 17:26:31.009222 31817 round_trippers.go:473] Content-Type: application/json
I0717 17:26:31.018362 31817 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0717 17:26:31.018530 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:31.018542 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:31.018820 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:31.018857 31817 main.go:141] libmachine: (ha-333994) DBG | Closing plugin on server side
I0717 17:26:31.018879 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:31.020620 31817 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0717 17:26:31.022095 31817 addons.go:510] duration metric: took 995.669545ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0717 17:26:31.022154 31817 start.go:246] waiting for cluster config update ...
I0717 17:26:31.022168 31817 start.go:255] writing updated cluster config ...
I0717 17:26:31.023733 31817 out.go:177]
I0717 17:26:31.025261 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:31.025354 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:26:31.027151 31817 out.go:177] * Starting "ha-333994-m02" control-plane node in "ha-333994" cluster
I0717 17:26:31.028468 31817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 17:26:31.028493 31817 cache.go:56] Caching tarball of preloaded images
I0717 17:26:31.028581 31817 preload.go:172] Found /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 17:26:31.028597 31817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on containerd
I0717 17:26:31.028681 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:26:31.028874 31817 start.go:360] acquireMachinesLock for ha-333994-m02: {Name:mk0f74b853b0d6e269bf0c6a25c6edeb4f1994c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 17:26:31.028940 31817 start.go:364] duration metric: took 41.632µs to acquireMachinesLock for "ha-333994-m02"
I0717 17:26:31.028968 31817 start.go:93] Provisioning new machine with config: &{Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 17:26:31.029076 31817 start.go:125] createHost starting for "m02" (driver="kvm2")
I0717 17:26:31.030724 31817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0717 17:26:31.030825 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:31.030857 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:31.044970 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
I0717 17:26:31.045405 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:31.045822 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:31.045844 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:31.046177 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:31.046354 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:31.046509 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:31.046649 31817 start.go:159] libmachine.API.Create for "ha-333994" (driver="kvm2")
I0717 17:26:31.046672 31817 client.go:168] LocalClient.Create starting
I0717 17:26:31.046708 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem
I0717 17:26:31.046743 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:26:31.046763 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:26:31.046824 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem
I0717 17:26:31.046847 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:26:31.046863 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:26:31.046888 31817 main.go:141] libmachine: Running pre-create checks...
I0717 17:26:31.046900 31817 main.go:141] libmachine: (ha-333994-m02) Calling .PreCreateCheck
I0717 17:26:31.047078 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetConfigRaw
I0717 17:26:31.047493 31817 main.go:141] libmachine: Creating machine...
I0717 17:26:31.047506 31817 main.go:141] libmachine: (ha-333994-m02) Calling .Create
I0717 17:26:31.047622 31817 main.go:141] libmachine: (ha-333994-m02) Creating KVM machine...
I0717 17:26:31.048765 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found existing default KVM network
I0717 17:26:31.048898 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found existing private KVM network mk-ha-333994
I0717 17:26:31.048996 31817 main.go:141] libmachine: (ha-333994-m02) Setting up store path in /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02 ...
I0717 17:26:31.049023 31817 main.go:141] libmachine: (ha-333994-m02) Building disk image from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
I0717 17:26:31.049102 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.048983 32198 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:26:31.049157 31817 main.go:141] libmachine: (ha-333994-m02) Downloading /home/jenkins/minikube-integration/19283-14409/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
I0717 17:26:31.264550 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.264392 32198 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa...
I0717 17:26:31.437178 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.437075 32198 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/ha-333994-m02.rawdisk...
I0717 17:26:31.437206 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Writing magic tar header
I0717 17:26:31.437216 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Writing SSH key tar header
I0717 17:26:31.437287 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.437231 32198 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02 ...
I0717 17:26:31.437381 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02
I0717 17:26:31.437404 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines
I0717 17:26:31.437414 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02 (perms=drwx------)
I0717 17:26:31.437427 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines (perms=drwxr-xr-x)
I0717 17:26:31.437434 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube (perms=drwxr-xr-x)
I0717 17:26:31.437446 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409 (perms=drwxrwxr-x)
I0717 17:26:31.437455 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0717 17:26:31.437469 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0717 17:26:31.437487 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:26:31.437496 31817 main.go:141] libmachine: (ha-333994-m02) Creating domain...
I0717 17:26:31.437506 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409
I0717 17:26:31.437514 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0717 17:26:31.437521 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins
I0717 17:26:31.437528 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home
I0717 17:26:31.437535 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Skipping /home - not owner
I0717 17:26:31.438521 31817 main.go:141] libmachine: (ha-333994-m02) define libvirt domain using xml:
I0717 17:26:31.438545 31817 main.go:141] libmachine: (ha-333994-m02) <domain type='kvm'>
I0717 17:26:31.438556 31817 main.go:141] libmachine: (ha-333994-m02) <name>ha-333994-m02</name>
I0717 17:26:31.438567 31817 main.go:141] libmachine: (ha-333994-m02) <memory unit='MiB'>2200</memory>
I0717 17:26:31.438579 31817 main.go:141] libmachine: (ha-333994-m02) <vcpu>2</vcpu>
I0717 17:26:31.438584 31817 main.go:141] libmachine: (ha-333994-m02) <features>
I0717 17:26:31.438589 31817 main.go:141] libmachine: (ha-333994-m02) <acpi/>
I0717 17:26:31.438593 31817 main.go:141] libmachine: (ha-333994-m02) <apic/>
I0717 17:26:31.438600 31817 main.go:141] libmachine: (ha-333994-m02) <pae/>
I0717 17:26:31.438604 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.438610 31817 main.go:141] libmachine: (ha-333994-m02) </features>
I0717 17:26:31.438614 31817 main.go:141] libmachine: (ha-333994-m02) <cpu mode='host-passthrough'>
I0717 17:26:31.438621 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.438628 31817 main.go:141] libmachine: (ha-333994-m02) </cpu>
I0717 17:26:31.438640 31817 main.go:141] libmachine: (ha-333994-m02) <os>
I0717 17:26:31.438654 31817 main.go:141] libmachine: (ha-333994-m02) <type>hvm</type>
I0717 17:26:31.438664 31817 main.go:141] libmachine: (ha-333994-m02) <boot dev='cdrom'/>
I0717 17:26:31.438671 31817 main.go:141] libmachine: (ha-333994-m02) <boot dev='hd'/>
I0717 17:26:31.438679 31817 main.go:141] libmachine: (ha-333994-m02) <bootmenu enable='no'/>
I0717 17:26:31.438683 31817 main.go:141] libmachine: (ha-333994-m02) </os>
I0717 17:26:31.438688 31817 main.go:141] libmachine: (ha-333994-m02) <devices>
I0717 17:26:31.438696 31817 main.go:141] libmachine: (ha-333994-m02) <disk type='file' device='cdrom'>
I0717 17:26:31.438705 31817 main.go:141] libmachine: (ha-333994-m02) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/boot2docker.iso'/>
I0717 17:26:31.438717 31817 main.go:141] libmachine: (ha-333994-m02) <target dev='hdc' bus='scsi'/>
I0717 17:26:31.438728 31817 main.go:141] libmachine: (ha-333994-m02) <readonly/>
I0717 17:26:31.438741 31817 main.go:141] libmachine: (ha-333994-m02) </disk>
I0717 17:26:31.438755 31817 main.go:141] libmachine: (ha-333994-m02) <disk type='file' device='disk'>
I0717 17:26:31.438807 31817 main.go:141] libmachine: (ha-333994-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0717 17:26:31.438833 31817 main.go:141] libmachine: (ha-333994-m02) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/ha-333994-m02.rawdisk'/>
I0717 17:26:31.438839 31817 main.go:141] libmachine: (ha-333994-m02) <target dev='hda' bus='virtio'/>
I0717 17:26:31.438845 31817 main.go:141] libmachine: (ha-333994-m02) </disk>
I0717 17:26:31.438850 31817 main.go:141] libmachine: (ha-333994-m02) <interface type='network'>
I0717 17:26:31.438856 31817 main.go:141] libmachine: (ha-333994-m02) <source network='mk-ha-333994'/>
I0717 17:26:31.438860 31817 main.go:141] libmachine: (ha-333994-m02) <model type='virtio'/>
I0717 17:26:31.438865 31817 main.go:141] libmachine: (ha-333994-m02) </interface>
I0717 17:26:31.438871 31817 main.go:141] libmachine: (ha-333994-m02) <interface type='network'>
I0717 17:26:31.438883 31817 main.go:141] libmachine: (ha-333994-m02) <source network='default'/>
I0717 17:26:31.438890 31817 main.go:141] libmachine: (ha-333994-m02) <model type='virtio'/>
I0717 17:26:31.438898 31817 main.go:141] libmachine: (ha-333994-m02) </interface>
I0717 17:26:31.438911 31817 main.go:141] libmachine: (ha-333994-m02) <serial type='pty'>
I0717 17:26:31.438923 31817 main.go:141] libmachine: (ha-333994-m02) <target port='0'/>
I0717 17:26:31.438931 31817 main.go:141] libmachine: (ha-333994-m02) </serial>
I0717 17:26:31.438942 31817 main.go:141] libmachine: (ha-333994-m02) <console type='pty'>
I0717 17:26:31.438953 31817 main.go:141] libmachine: (ha-333994-m02) <target type='serial' port='0'/>
I0717 17:26:31.438964 31817 main.go:141] libmachine: (ha-333994-m02) </console>
I0717 17:26:31.438974 31817 main.go:141] libmachine: (ha-333994-m02) <rng model='virtio'>
I0717 17:26:31.438987 31817 main.go:141] libmachine: (ha-333994-m02) <backend model='random'>/dev/random</backend>
I0717 17:26:31.438999 31817 main.go:141] libmachine: (ha-333994-m02) </rng>
I0717 17:26:31.439010 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.439021 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.439030 31817 main.go:141] libmachine: (ha-333994-m02) </devices>
I0717 17:26:31.439039 31817 main.go:141] libmachine: (ha-333994-m02) </domain>
I0717 17:26:31.439049 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.445546 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:e9:27:93 in network default
I0717 17:26:31.446057 31817 main.go:141] libmachine: (ha-333994-m02) Ensuring networks are active...
I0717 17:26:31.446081 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:31.446683 31817 main.go:141] libmachine: (ha-333994-m02) Ensuring network default is active
I0717 17:26:31.446957 31817 main.go:141] libmachine: (ha-333994-m02) Ensuring network mk-ha-333994 is active
I0717 17:26:31.447352 31817 main.go:141] libmachine: (ha-333994-m02) Getting domain xml...
I0717 17:26:31.447953 31817 main.go:141] libmachine: (ha-333994-m02) Creating domain...
I0717 17:26:32.668554 31817 main.go:141] libmachine: (ha-333994-m02) Waiting to get IP...
I0717 17:26:32.669421 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:32.669837 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:32.669869 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:32.669821 32198 retry.go:31] will retry after 265.908605ms: waiting for machine to come up
I0717 17:26:32.937392 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:32.937818 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:32.937841 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:32.937787 32198 retry.go:31] will retry after 263.816332ms: waiting for machine to come up
I0717 17:26:33.203484 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:33.203889 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:33.203915 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:33.203865 32198 retry.go:31] will retry after 370.046003ms: waiting for machine to come up
I0717 17:26:33.575157 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:33.575547 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:33.575577 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:33.575470 32198 retry.go:31] will retry after 487.691796ms: waiting for machine to come up
I0717 17:26:34.065171 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:34.065647 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:34.065668 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:34.065610 32198 retry.go:31] will retry after 737.756145ms: waiting for machine to come up
I0717 17:26:34.804469 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:34.804805 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:34.804833 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:34.804748 32198 retry.go:31] will retry after 716.008929ms: waiting for machine to come up
I0717 17:26:35.522742 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:35.523151 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:35.523175 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:35.523122 32198 retry.go:31] will retry after 1.039877882s: waiting for machine to come up
I0717 17:26:36.564784 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:36.565187 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:36.565236 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:36.565168 32198 retry.go:31] will retry after 946.347249ms: waiting for machine to come up
I0717 17:26:37.513629 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:37.514132 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:37.514159 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:37.514078 32198 retry.go:31] will retry after 1.425543571s: waiting for machine to come up
I0717 17:26:38.941439 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:38.941914 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:38.941941 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:38.941867 32198 retry.go:31] will retry after 2.252250366s: waiting for machine to come up
I0717 17:26:41.195297 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:41.195830 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:41.195853 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:41.195783 32198 retry.go:31] will retry after 2.725572397s: waiting for machine to come up
I0717 17:26:43.922616 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:43.923015 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:43.923039 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:43.922970 32198 retry.go:31] will retry after 3.508475549s: waiting for machine to come up
I0717 17:26:47.432839 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:47.433277 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:47.433306 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:47.433245 32198 retry.go:31] will retry after 3.328040591s: waiting for machine to come up
I0717 17:26:50.765649 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:50.766087 31817 main.go:141] libmachine: (ha-333994-m02) Found IP for machine: 192.168.39.127
I0717 17:26:50.766108 31817 main.go:141] libmachine: (ha-333994-m02) Reserving static IP address...
I0717 17:26:50.766147 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has current primary IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:50.766429 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find host DHCP lease matching {name: "ha-333994-m02", mac: "52:54:00:b1:0f:81", ip: "192.168.39.127"} in network mk-ha-333994
I0717 17:26:50.835843 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Getting to WaitForSSH function...
I0717 17:26:50.835875 31817 main.go:141] libmachine: (ha-333994-m02) Reserved static IP address: 192.168.39.127
I0717 17:26:50.835890 31817 main.go:141] libmachine: (ha-333994-m02) Waiting for SSH to be available...
I0717 17:26:50.838442 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:50.838833 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994
I0717 17:26:50.838858 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find defined IP address of network mk-ha-333994 interface with MAC address 52:54:00:b1:0f:81
I0717 17:26:50.839017 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH client type: external
I0717 17:26:50.839052 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa (-rw-------)
I0717 17:26:50.839081 31817 main.go:141] libmachine: (ha-333994-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0717 17:26:50.839104 31817 main.go:141] libmachine: (ha-333994-m02) DBG | About to run SSH command:
I0717 17:26:50.839121 31817 main.go:141] libmachine: (ha-333994-m02) DBG | exit 0
I0717 17:26:50.842964 31817 main.go:141] libmachine: (ha-333994-m02) DBG | SSH cmd err, output: exit status 255:
I0717 17:26:50.842984 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0717 17:26:50.842995 31817 main.go:141] libmachine: (ha-333994-m02) DBG | command : exit 0
I0717 17:26:50.843004 31817 main.go:141] libmachine: (ha-333994-m02) DBG | err : exit status 255
I0717 17:26:50.843028 31817 main.go:141] libmachine: (ha-333994-m02) DBG | output :
I0717 17:26:53.843162 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Getting to WaitForSSH function...
I0717 17:26:53.845524 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.845912 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:53.845964 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.846160 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH client type: external
I0717 17:26:53.846190 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa (-rw-------)
I0717 17:26:53.846218 31817 main.go:141] libmachine: (ha-333994-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0717 17:26:53.846237 31817 main.go:141] libmachine: (ha-333994-m02) DBG | About to run SSH command:
I0717 17:26:53.846249 31817 main.go:141] libmachine: (ha-333994-m02) DBG | exit 0
I0717 17:26:53.977891 31817 main.go:141] libmachine: (ha-333994-m02) DBG | SSH cmd err, output: <nil>:
I0717 17:26:53.978192 31817 main.go:141] libmachine: (ha-333994-m02) KVM machine creation complete!
I0717 17:26:53.978493 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetConfigRaw
I0717 17:26:53.979005 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:53.979196 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:53.979349 31817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0717 17:26:53.979361 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetState
I0717 17:26:53.980446 31817 main.go:141] libmachine: Detecting operating system of created instance...
I0717 17:26:53.980458 31817 main.go:141] libmachine: Waiting for SSH to be available...
I0717 17:26:53.980463 31817 main.go:141] libmachine: Getting to WaitForSSH function...
I0717 17:26:53.980469 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:53.982666 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.983028 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:53.983061 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.983193 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:53.983351 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:53.983482 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:53.983592 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:53.983736 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:53.983941 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:53.983953 31817 main.go:141] libmachine: About to run SSH command:
exit 0
I0717 17:26:54.097606 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 17:26:54.097631 31817 main.go:141] libmachine: Detecting the provisioner...
I0717 17:26:54.097638 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.100274 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.100592 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.100626 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.100772 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.100954 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.101115 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.101230 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.101387 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:54.101557 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:54.101569 31817 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0717 17:26:54.214758 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0717 17:26:54.214823 31817 main.go:141] libmachine: found compatible host: buildroot
I0717 17:26:54.214832 31817 main.go:141] libmachine: Provisioning with buildroot...
I0717 17:26:54.214839 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:54.215071 31817 buildroot.go:166] provisioning hostname "ha-333994-m02"
I0717 17:26:54.215095 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:54.215281 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.217709 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.218130 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.218157 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.218274 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.218456 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.218598 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.218743 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.218879 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:54.219074 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:54.219087 31817 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-333994-m02 && echo "ha-333994-m02" | sudo tee /etc/hostname
I0717 17:26:54.348717 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-333994-m02
I0717 17:26:54.348783 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.351584 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.351923 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.351944 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.352126 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.352288 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.352474 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.352599 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.352725 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:54.352881 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:54.352895 31817 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-333994-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-333994-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-333994-m02' | sudo tee -a /etc/hosts;
fi
fi
I0717 17:26:54.476331 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 17:26:54.476371 31817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14409/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14409/.minikube}
I0717 17:26:54.476397 31817 buildroot.go:174] setting up certificates
I0717 17:26:54.476416 31817 provision.go:84] configureAuth start
I0717 17:26:54.476438 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:54.476719 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:54.479208 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.479564 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.479592 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.479788 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.481800 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.482086 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.482109 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.482263 31817 provision.go:143] copyHostCerts
I0717 17:26:54.482290 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:26:54.482319 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem, removing ...
I0717 17:26:54.482328 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:26:54.482388 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem (1082 bytes)
I0717 17:26:54.482455 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:26:54.482472 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem, removing ...
I0717 17:26:54.482478 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:26:54.482502 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem (1123 bytes)
I0717 17:26:54.482542 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:26:54.482558 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem, removing ...
I0717 17:26:54.482564 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:26:54.482584 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem (1679 bytes)
I0717 17:26:54.482627 31817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem org=jenkins.ha-333994-m02 san=[127.0.0.1 192.168.39.127 ha-333994-m02 localhost minikube]
I0717 17:26:54.697157 31817 provision.go:177] copyRemoteCerts
I0717 17:26:54.697210 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 17:26:54.697233 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.699959 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.700263 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.700281 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.700480 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.700699 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.700860 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.701000 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
I0717 17:26:54.792678 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 17:26:54.792760 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0717 17:26:54.816985 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 17:26:54.817058 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0717 17:26:54.841268 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 17:26:54.841343 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 17:26:54.865093 31817 provision.go:87] duration metric: took 388.663223ms to configureAuth
I0717 17:26:54.865120 31817 buildroot.go:189] setting minikube options for container-runtime
I0717 17:26:54.865311 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:54.865337 31817 main.go:141] libmachine: Checking connection to Docker...
I0717 17:26:54.865347 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetURL
I0717 17:26:54.866495 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using libvirt version 6000000
I0717 17:26:54.868417 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.868765 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.868792 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.868933 31817 main.go:141] libmachine: Docker is up and running!
I0717 17:26:54.868949 31817 main.go:141] libmachine: Reticulating splines...
I0717 17:26:54.868955 31817 client.go:171] duration metric: took 23.822273283s to LocalClient.Create
I0717 17:26:54.868974 31817 start.go:167] duration metric: took 23.822329608s to libmachine.API.Create "ha-333994"
I0717 17:26:54.868982 31817 start.go:293] postStartSetup for "ha-333994-m02" (driver="kvm2")
I0717 17:26:54.868990 31817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 17:26:54.869011 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:54.869243 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 17:26:54.869264 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.871450 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.871816 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.871840 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.872022 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.872180 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.872326 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.872476 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
I0717 17:26:54.961235 31817 ssh_runner.go:195] Run: cat /etc/os-release
I0717 17:26:54.965604 31817 info.go:137] Remote host: Buildroot 2023.02.9
I0717 17:26:54.965626 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/addons for local assets ...
I0717 17:26:54.965684 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/files for local assets ...
I0717 17:26:54.965757 31817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> 216612.pem in /etc/ssl/certs
I0717 17:26:54.965766 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /etc/ssl/certs/216612.pem
I0717 17:26:54.965847 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 17:26:54.975595 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:26:54.999236 31817 start.go:296] duration metric: took 130.241349ms for postStartSetup
I0717 17:26:54.999289 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetConfigRaw
I0717 17:26:54.999814 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:55.002512 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.002864 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.002901 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.003161 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:26:55.003366 31817 start.go:128] duration metric: took 23.974275382s to createHost
I0717 17:26:55.003388 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:55.005328 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.005632 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.005656 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.005830 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:55.006002 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.006161 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.006292 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:55.006451 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:55.006637 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:55.006649 31817 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0717 17:26:55.122903 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237215.098211807
I0717 17:26:55.122928 31817 fix.go:216] guest clock: 1721237215.098211807
I0717 17:26:55.122937 31817 fix.go:229] Guest: 2024-07-17 17:26:55.098211807 +0000 UTC Remote: 2024-07-17 17:26:55.003376883 +0000 UTC m=+77.663313056 (delta=94.834924ms)
I0717 17:26:55.122956 31817 fix.go:200] guest clock delta is within tolerance: 94.834924ms
I0717 17:26:55.122962 31817 start.go:83] releasing machines lock for "ha-333994-m02", held for 24.094009758s
I0717 17:26:55.122986 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.123244 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:55.125631 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.125927 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.125955 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.128661 31817 out.go:177] * Found network options:
I0717 17:26:55.130349 31817 out.go:177] - NO_PROXY=192.168.39.180
W0717 17:26:55.131717 31817 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 17:26:55.131742 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.132304 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.132476 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.132554 31817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 17:26:55.132594 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
W0717 17:26:55.132666 31817 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 17:26:55.132744 31817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 17:26:55.132772 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:55.135185 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135477 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.135501 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135519 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135642 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:55.135817 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.135976 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.135995 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135977 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:55.136127 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:55.136190 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
I0717 17:26:55.136268 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.136402 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:55.136527 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
W0717 17:26:55.220815 31817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 17:26:55.220875 31817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 17:26:55.245507 31817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 17:26:55.245531 31817 start.go:495] detecting cgroup driver to use...
I0717 17:26:55.245596 31817 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 17:26:55.278918 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 17:26:55.292940 31817 docker.go:217] disabling cri-docker service (if available) ...
I0717 17:26:55.293020 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 17:26:55.306646 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 17:26:55.321727 31817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 17:26:55.453026 31817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 17:26:55.618252 31817 docker.go:233] disabling docker service ...
I0717 17:26:55.618323 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 17:26:55.633535 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 17:26:55.647399 31817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 17:26:55.767544 31817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 17:26:55.888191 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 17:26:55.901625 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 17:26:55.919869 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 17:26:55.930472 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 17:26:55.940635 31817 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 17:26:55.940681 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 17:26:55.950966 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:26:55.961459 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 17:26:55.972051 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:26:55.983017 31817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 17:26:55.993746 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 17:26:56.004081 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 17:26:56.014291 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 17:26:56.024660 31817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 17:26:56.033932 31817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0717 17:26:56.033978 31817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0717 17:26:56.047409 31817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 17:26:56.057123 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:26:56.196097 31817 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 17:26:56.227087 31817 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 17:26:56.227147 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:26:56.232659 31817 retry.go:31] will retry after 933.236719ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0717 17:26:57.166776 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:26:57.172003 31817 start.go:563] Will wait 60s for crictl version
I0717 17:26:57.172071 31817 ssh_runner.go:195] Run: which crictl
I0717 17:26:57.176036 31817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 17:26:57.214182 31817 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.19
RuntimeApiVersion: v1
I0717 17:26:57.214259 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:57.239883 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:57.270199 31817 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.7.19 ...
I0717 17:26:57.271461 31817 out.go:177] - env NO_PROXY=192.168.39.180
I0717 17:26:57.272522 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:57.274799 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:57.275154 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:57.275183 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:57.275351 31817 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0717 17:26:57.279650 31817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 17:26:57.293824 31817 mustload.go:65] Loading cluster: ha-333994
I0717 17:26:57.294006 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:57.294269 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:57.294293 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:57.308598 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
I0717 17:26:57.309000 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:57.309480 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:57.309502 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:57.309752 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:57.309903 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:57.311534 31817 host.go:66] Checking if "ha-333994" exists ...
I0717 17:26:57.311828 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:57.311870 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:57.326228 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
I0717 17:26:57.326552 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:57.327001 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:57.327022 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:57.327287 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:57.327462 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:26:57.327619 31817 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994 for IP: 192.168.39.127
I0717 17:26:57.327627 31817 certs.go:194] generating shared ca certs ...
I0717 17:26:57.327639 31817 certs.go:226] acquiring lock for ca certs: {Name:mkbd59c659d87951ff3ee355cd9afc07084cc973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:57.327753 31817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key
I0717 17:26:57.327802 31817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key
I0717 17:26:57.327812 31817 certs.go:256] generating profile certs ...
I0717 17:26:57.327877 31817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key
I0717 17:26:57.327900 31817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff
I0717 17:26:57.327913 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.180 192.168.39.127 192.168.39.254]
I0717 17:26:57.458239 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff ...
I0717 17:26:57.458268 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff: {Name:mke87290a04a64b5c9a3f70eca7bbd7f3ab62e57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:57.458428 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff ...
I0717 17:26:57.458440 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff: {Name:mkcd9a6c319770e7232a22dd759a83106e261b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:57.458506 31817 certs.go:381] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt
I0717 17:26:57.458644 31817 certs.go:385] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key
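The apiserver cert generated above is signed for every control-plane endpoint at once: the in-cluster service IP (10.96.0.1), localhost, both node IPs, and the HA VIP 192.168.39.254. A hand-rolled equivalent with `openssl`, using a throwaway CA in a scratch directory (all names and paths here are illustrative, not minikube's actual code path):

```shell
WORK=$(mktemp -d)
# Throwaway CA standing in for minikubeCA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$WORK/ca.key" -out "$WORK/ca.crt" -days 1 2>/dev/null
# CSR for the apiserver, then sign it with the same SAN set the log shows.
openssl req -newkey rsa:2048 -nodes -subj "/CN=minikube" \
  -keyout "$WORK/apiserver.key" -out "$WORK/apiserver.csr" 2>/dev/null
printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.180,IP:192.168.39.127,IP:192.168.39.254\n' \
  > "$WORK/san.cnf"
openssl x509 -req -in "$WORK/apiserver.csr" -CA "$WORK/ca.crt" \
  -CAkey "$WORK/ca.key" -CAcreateserial -days 1 \
  -extfile "$WORK/san.cnf" -out "$WORK/apiserver.crt" 2>/dev/null
openssl x509 -noout -text -in "$WORK/apiserver.crt" | grep -A1 'Subject Alternative Name'
```

Including the VIP in the SAN list is what lets clients talk to whichever control plane currently holds 192.168.39.254 without a TLS hostname mismatch.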
I0717 17:26:57.458768 31817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key
I0717 17:26:57.458782 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0717 17:26:57.458794 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0717 17:26:57.458806 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0717 17:26:57.458818 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0717 17:26:57.458830 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0717 17:26:57.458841 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0717 17:26:57.458852 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0717 17:26:57.458865 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0717 17:26:57.458910 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem (1338 bytes)
W0717 17:26:57.458936 31817 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661_empty.pem, impossibly tiny 0 bytes
I0717 17:26:57.458945 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem (1679 bytes)
I0717 17:26:57.458966 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem (1082 bytes)
I0717 17:26:57.458986 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem (1123 bytes)
I0717 17:26:57.459013 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem (1679 bytes)
I0717 17:26:57.459048 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:26:57.459071 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /usr/share/ca-certificates/216612.pem
I0717 17:26:57.459084 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:57.459095 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem -> /usr/share/ca-certificates/21661.pem
I0717 17:26:57.459124 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:26:57.461994 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:57.462403 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:57.462430 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:57.462587 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:26:57.462744 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:26:57.462905 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:26:57.462996 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:26:57.538412 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0717 17:26:57.543898 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0717 17:26:57.556474 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0717 17:26:57.560660 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0717 17:26:57.570923 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0717 17:26:57.574879 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0717 17:26:57.585092 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0717 17:26:57.589304 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I0717 17:26:57.599639 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0717 17:26:57.603878 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0717 17:26:57.616227 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0717 17:26:57.620350 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I0717 17:26:57.632125 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 17:26:57.657494 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0717 17:26:57.682754 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 17:26:57.707851 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 17:26:57.731860 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0717 17:26:57.757707 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0717 17:26:57.781205 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 17:26:57.804275 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 17:26:57.829670 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /usr/share/ca-certificates/216612.pem (1708 bytes)
I0717 17:26:57.855063 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 17:26:57.881215 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem --> /usr/share/ca-certificates/21661.pem (1338 bytes)
I0717 17:26:57.906393 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0717 17:26:57.924441 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0717 17:26:57.942446 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0717 17:26:57.958731 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I0717 17:26:57.974971 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0717 17:26:57.991007 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I0717 17:26:58.006856 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0717 17:26:58.023616 31817 ssh_runner.go:195] Run: openssl version
I0717 17:26:58.029309 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216612.pem && ln -fs /usr/share/ca-certificates/216612.pem /etc/ssl/certs/216612.pem"
I0717 17:26:58.040022 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216612.pem
I0717 17:26:58.044627 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:21 /usr/share/ca-certificates/216612.pem
I0717 17:26:58.044684 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216612.pem
I0717 17:26:58.050556 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/216612.pem /etc/ssl/certs/3ec20f2e.0"
I0717 17:26:58.060921 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 17:26:58.071585 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:58.075832 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:13 /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:58.075882 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:58.081281 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 17:26:58.091769 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21661.pem && ln -fs /usr/share/ca-certificates/21661.pem /etc/ssl/certs/21661.pem"
I0717 17:26:58.102180 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21661.pem
I0717 17:26:58.106524 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:21 /usr/share/ca-certificates/21661.pem
I0717 17:26:58.106575 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21661.pem
I0717 17:26:58.112063 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21661.pem /etc/ssl/certs/51391683.0"
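The `test -L` / `ln -fs` / `openssl x509 -hash` sequence above builds OpenSSL's hashed CA lookup directory: verification code resolves a CA by a symlink named `<subject-hash>.0` pointing at the PEM. A self-contained demo with a throwaway cert in a scratch directory (illustrative names, not the real /etc/ssl/certs):

```shell
CERTDIR=$(mktemp -d)
# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$CERTDIR/demo.key" -out "$CERTDIR/demo.pem" -days 1 2>/dev/null

# OpenSSL locates CAs via <subject-hash>.0 symlinks, which is exactly what
# the `ln -fs ... /etc/ssl/certs/<hash>.0` commands in the log create.
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/demo.pem")
ln -fs "$CERTDIR/demo.pem" "$CERTDIR/$HASH.0"
ls -l "$CERTDIR/$HASH.0"
```

The `b5213941.0` and `51391683.0` names in the log are just these subject hashes for minikubeCA.pem and 21661.pem.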
I0717 17:26:58.122675 31817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 17:26:58.126524 31817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0717 17:26:58.126576 31817 kubeadm.go:934] updating node {m02 192.168.39.127 8443 v1.30.2 containerd true true} ...
I0717 17:26:58.126678 31817 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-333994-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 17:26:58.126707 31817 kube-vip.go:115] generating kube-vip config ...
I0717 17:26:58.126735 31817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0717 17:26:58.143233 31817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0717 17:26:58.143291 31817 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I0717 17:26:58.143334 31817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 17:26:58.153157 31817 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
Initiating transfer...
I0717 17:26:58.153211 31817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
I0717 17:26:58.162734 31817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
I0717 17:26:58.162759 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
I0717 17:26:58.162833 31817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
I0717 17:26:58.162840 31817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubelet
I0717 17:26:58.162877 31817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubeadm
I0717 17:26:58.167096 31817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
I0717 17:26:58.167122 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
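The kubelet/kubeadm downloads above use go-getter's `?checksum=file:<url>.sha256` suffix: the published digest is fetched alongside the payload and verified before the binary lands in `/var/lib/minikube/binaries/v1.30.2/`. The same check done by hand against a scratch file (no network; the file contents and paths are stand-ins):

```shell
WORK=$(mktemp -d)
# Stand-in payload; the real download is the kubelet binary from dl.k8s.io.
printf 'fake-kubelet-binary\n' > "$WORK/kubelet"
sha256sum "$WORK/kubelet" | awk '{print $1}' > "$WORK/kubelet.sha256"

# Recompute and compare before "installing", as go-getter does with
# the checksum=file:<url>.sha256 query parameter.
want=$(cat "$WORK/kubelet.sha256")
got=$(sha256sum "$WORK/kubelet" | awk '{print $1}')
[ "$want" = "$got" ] && echo "checksum OK"
```

When the transfer is cut short, as happens below with the connection reset from 151.101.193.55, the digest mismatch (or the failed read itself) aborts the install rather than leaving a truncated kubelet on the node.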
I0717 17:27:14.120624 31817 out.go:177]
W0717 17:27:14.122586 31817 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49ca920 0x49ca920 0x49ca920 0x49ca920 0x49ca920 0x49ca920 0x49ca920] Decompressors:map[bz2:0xc000883490 gz:0xc000883498 tar:0xc000883440 tar.bz2:0xc000883450 tar.gz:0xc000883460 tar.xz:0xc000883470 tar.zst:0xc000883480 tbz2:0xc000883450 tgz:0xc000883460 txz:0xc000883470 tzst:0xc000883480 xz:0xc0008834a0 zip:0xc0008834b0 zst:0xc0008834a8] Getters:map[file:0xc000691350 http:0x
c0009febe0 https:0xc0009fec30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.194.0.2:36556->151.101.193.55:443: read: connection reset by peer
W0717 17:27:14.122605 31817 out.go:239] *
W0717 17:27:14.123461 31817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 17:27:14.125013 31817 out.go:177]
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 start -p ha-333994 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p ha-333994 -n ha-333994
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p ha-333994 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-333994 logs -n 25: (1.213385186s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs:
-- stdout --
==> Audit <==
|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| image | functional-142583 image ls | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| ssh | functional-142583 ssh findmnt | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | -T /mount1 | | | | | |
| image | functional-142583 image load | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar | | | | | |
| | --alsologtostderr | | | | | |
| ssh | functional-142583 ssh findmnt | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | -T /mount2 | | | | | |
| ssh | functional-142583 ssh findmnt | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | -T /mount3 | | | | | |
| mount | -p functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | |
| | --kill=true | | | | | |
| addons | functional-142583 addons list | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| addons | functional-142583 addons list | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | -o json | | | | | |
| ssh | functional-142583 ssh echo | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | hello | | | | | |
| image | functional-142583 image ls | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| ssh | functional-142583 ssh cat | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | /etc/hostname | | | | | |
| image | functional-142583 image save --daemon | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:24 UTC | 17 Jul 24 17:24 UTC |
| | docker.io/kicbase/echo-server:functional-142583 | | | | | |
| | --alsologtostderr | | | | | |
| update-context | functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| service | functional-142583 service | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | hello-node-connect --url | | | | | |
| image | functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | image ls --format short | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | image ls --format yaml | | | | | |
| | --alsologtostderr | | | | | |
| ssh | functional-142583 ssh pgrep | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | |
| | buildkitd | | | | | |
| image | functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | image ls --format json | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-142583 image build -t | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | localhost/my-image:functional-142583 | | | | | |
| | testdata/build --alsologtostderr | | | | | |
| image | functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| | image ls --format table | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-142583 image ls | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| delete | -p functional-142583 | functional-142583 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | 17 Jul 24 17:25 UTC |
| start | -p ha-333994 --wait=true | ha-333994 | jenkins | v1.33.1 | 17 Jul 24 17:25 UTC | |
| | --memory=2200 --ha | | | | | |
| | -v=7 --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/17 17:25:37
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 17:25:37.372173 31817 out.go:291] Setting OutFile to fd 1 ...
I0717 17:25:37.372300 31817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:25:37.372309 31817 out.go:304] Setting ErrFile to fd 2...
I0717 17:25:37.372316 31817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:25:37.372515 31817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14409/.minikube/bin
I0717 17:25:37.373068 31817 out.go:298] Setting JSON to false
I0717 17:25:37.373934 31817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4080,"bootTime":1721233057,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0717 17:25:37.373990 31817 start.go:139] virtualization: kvm guest
I0717 17:25:37.376261 31817 out.go:177] * [ha-333994] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0717 17:25:37.377830 31817 notify.go:220] Checking for updates...
I0717 17:25:37.377854 31817 out.go:177] - MINIKUBE_LOCATION=19283
I0717 17:25:37.379322 31817 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 17:25:37.380779 31817 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19283-14409/kubeconfig
I0717 17:25:37.382329 31817 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:25:37.383666 31817 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0717 17:25:37.384940 31817 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 17:25:37.386314 31817 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 17:25:37.420051 31817 out.go:177] * Using the kvm2 driver based on user configuration
I0717 17:25:37.421589 31817 start.go:297] selected driver: kvm2
I0717 17:25:37.421607 31817 start.go:901] validating driver "kvm2" against <nil>
I0717 17:25:37.421618 31817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 17:25:37.422327 31817 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 17:25:37.422404 31817 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14409/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0717 17:25:37.437115 31817 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0717 17:25:37.437156 31817 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0717 17:25:37.437363 31817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 17:25:37.437413 31817 cni.go:84] Creating CNI manager for ""
I0717 17:25:37.437423 31817 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I0717 17:25:37.437432 31817 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0717 17:25:37.437478 31817 start.go:340] cluster config:
{Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 17:25:37.437562 31817 iso.go:125] acquiring lock: {Name:mk9ca422a70055a342d5e4afb354786e16c8e9d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 17:25:37.439313 31817 out.go:177] * Starting "ha-333994" primary control-plane node in "ha-333994" cluster
I0717 17:25:37.440697 31817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 17:25:37.440738 31817 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4
I0717 17:25:37.440745 31817 cache.go:56] Caching tarball of preloaded images
I0717 17:25:37.440816 31817 preload.go:172] Found /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 17:25:37.440827 31817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on containerd
I0717 17:25:37.441104 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:25:37.441121 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json: {Name:mk758d67ae5c79043a711460bac8ff59da52dd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:25:37.441235 31817 start.go:360] acquireMachinesLock for ha-333994: {Name:mk0f74b853b0d6e269bf0c6a25c6edeb4f1994c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 17:25:37.441263 31817 start.go:364] duration metric: took 16.553µs to acquireMachinesLock for "ha-333994"
I0717 17:25:37.441278 31817 start.go:93] Provisioning new machine with config: &{Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 17:25:37.441331 31817 start.go:125] createHost starting for "" (driver="kvm2")
I0717 17:25:37.442904 31817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0717 17:25:37.443026 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:25:37.443066 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:25:37.456958 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
I0717 17:25:37.457401 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:25:37.457924 31817 main.go:141] libmachine: Using API Version 1
I0717 17:25:37.457953 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:25:37.458234 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:25:37.458399 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:37.458508 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:37.458638 31817 start.go:159] libmachine.API.Create for "ha-333994" (driver="kvm2")
I0717 17:25:37.458664 31817 client.go:168] LocalClient.Create starting
I0717 17:25:37.458690 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem
I0717 17:25:37.458718 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:25:37.458731 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:25:37.458776 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem
I0717 17:25:37.458792 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:25:37.458803 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:25:37.458817 31817 main.go:141] libmachine: Running pre-create checks...
I0717 17:25:37.458825 31817 main.go:141] libmachine: (ha-333994) Calling .PreCreateCheck
I0717 17:25:37.459073 31817 main.go:141] libmachine: (ha-333994) Calling .GetConfigRaw
I0717 17:25:37.459495 31817 main.go:141] libmachine: Creating machine...
I0717 17:25:37.459514 31817 main.go:141] libmachine: (ha-333994) Calling .Create
I0717 17:25:37.459622 31817 main.go:141] libmachine: (ha-333994) Creating KVM machine...
I0717 17:25:37.460734 31817 main.go:141] libmachine: (ha-333994) DBG | found existing default KVM network
I0717 17:25:37.461376 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:37.461245 31840 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
I0717 17:25:37.461396 31817 main.go:141] libmachine: (ha-333994) DBG | created network xml:
I0717 17:25:37.461405 31817 main.go:141] libmachine: (ha-333994) DBG | <network>
I0717 17:25:37.461410 31817 main.go:141] libmachine: (ha-333994) DBG | <name>mk-ha-333994</name>
I0717 17:25:37.461416 31817 main.go:141] libmachine: (ha-333994) DBG | <dns enable='no'/>
I0717 17:25:37.461420 31817 main.go:141] libmachine: (ha-333994) DBG |
I0717 17:25:37.461438 31817 main.go:141] libmachine: (ha-333994) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0717 17:25:37.461448 31817 main.go:141] libmachine: (ha-333994) DBG | <dhcp>
I0717 17:25:37.461459 31817 main.go:141] libmachine: (ha-333994) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0717 17:25:37.461473 31817 main.go:141] libmachine: (ha-333994) DBG | </dhcp>
I0717 17:25:37.461490 31817 main.go:141] libmachine: (ha-333994) DBG | </ip>
I0717 17:25:37.461499 31817 main.go:141] libmachine: (ha-333994) DBG |
I0717 17:25:37.461508 31817 main.go:141] libmachine: (ha-333994) DBG | </network>
I0717 17:25:37.461513 31817 main.go:141] libmachine: (ha-333994) DBG |
I0717 17:25:37.467087 31817 main.go:141] libmachine: (ha-333994) DBG | trying to create private KVM network mk-ha-333994 192.168.39.0/24...
I0717 17:25:37.530969 31817 main.go:141] libmachine: (ha-333994) DBG | private KVM network mk-ha-333994 192.168.39.0/24 created
I0717 17:25:37.531012 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:37.530957 31840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:25:37.531029 31817 main.go:141] libmachine: (ha-333994) Setting up store path in /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994 ...
I0717 17:25:37.531050 31817 main.go:141] libmachine: (ha-333994) Building disk image from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
I0717 17:25:37.531153 31817 main.go:141] libmachine: (ha-333994) Downloading /home/jenkins/minikube-integration/19283-14409/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
I0717 17:25:37.769775 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:37.769643 31840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa...
I0717 17:25:38.127523 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:38.127394 31840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/ha-333994.rawdisk...
I0717 17:25:38.127548 31817 main.go:141] libmachine: (ha-333994) DBG | Writing magic tar header
I0717 17:25:38.127558 31817 main.go:141] libmachine: (ha-333994) DBG | Writing SSH key tar header
I0717 17:25:38.127566 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:38.127499 31840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994 ...
I0717 17:25:38.127579 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994
I0717 17:25:38.127621 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994 (perms=drwx------)
I0717 17:25:38.127638 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines (perms=drwxr-xr-x)
I0717 17:25:38.127649 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube (perms=drwxr-xr-x)
I0717 17:25:38.127659 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409 (perms=drwxrwxr-x)
I0717 17:25:38.127674 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0717 17:25:38.127685 31817 main.go:141] libmachine: (ha-333994) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0717 17:25:38.127697 31817 main.go:141] libmachine: (ha-333994) Creating domain...
I0717 17:25:38.127708 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines
I0717 17:25:38.127720 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:25:38.127729 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409
I0717 17:25:38.127736 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0717 17:25:38.127763 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home/jenkins
I0717 17:25:38.127774 31817 main.go:141] libmachine: (ha-333994) DBG | Checking permissions on dir: /home
I0717 17:25:38.127787 31817 main.go:141] libmachine: (ha-333994) DBG | Skipping /home - not owner
I0717 17:25:38.128688 31817 main.go:141] libmachine: (ha-333994) define libvirt domain using xml:
I0717 17:25:38.128706 31817 main.go:141] libmachine: (ha-333994) <domain type='kvm'>
I0717 17:25:38.128716 31817 main.go:141] libmachine: (ha-333994) <name>ha-333994</name>
I0717 17:25:38.128724 31817 main.go:141] libmachine: (ha-333994) <memory unit='MiB'>2200</memory>
I0717 17:25:38.128733 31817 main.go:141] libmachine: (ha-333994) <vcpu>2</vcpu>
I0717 17:25:38.128743 31817 main.go:141] libmachine: (ha-333994) <features>
I0717 17:25:38.128752 31817 main.go:141] libmachine: (ha-333994) <acpi/>
I0717 17:25:38.128758 31817 main.go:141] libmachine: (ha-333994) <apic/>
I0717 17:25:38.128768 31817 main.go:141] libmachine: (ha-333994) <pae/>
I0717 17:25:38.128788 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.128800 31817 main.go:141] libmachine: (ha-333994) </features>
I0717 17:25:38.128818 31817 main.go:141] libmachine: (ha-333994) <cpu mode='host-passthrough'>
I0717 17:25:38.128833 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.128844 31817 main.go:141] libmachine: (ha-333994) </cpu>
I0717 17:25:38.128854 31817 main.go:141] libmachine: (ha-333994) <os>
I0717 17:25:38.128867 31817 main.go:141] libmachine: (ha-333994) <type>hvm</type>
I0717 17:25:38.128878 31817 main.go:141] libmachine: (ha-333994) <boot dev='cdrom'/>
I0717 17:25:38.128890 31817 main.go:141] libmachine: (ha-333994) <boot dev='hd'/>
I0717 17:25:38.128901 31817 main.go:141] libmachine: (ha-333994) <bootmenu enable='no'/>
I0717 17:25:38.128927 31817 main.go:141] libmachine: (ha-333994) </os>
I0717 17:25:38.128949 31817 main.go:141] libmachine: (ha-333994) <devices>
I0717 17:25:38.128960 31817 main.go:141] libmachine: (ha-333994) <disk type='file' device='cdrom'>
I0717 17:25:38.128974 31817 main.go:141] libmachine: (ha-333994) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/boot2docker.iso'/>
I0717 17:25:38.128988 31817 main.go:141] libmachine: (ha-333994) <target dev='hdc' bus='scsi'/>
I0717 17:25:38.128998 31817 main.go:141] libmachine: (ha-333994) <readonly/>
I0717 17:25:38.129007 31817 main.go:141] libmachine: (ha-333994) </disk>
I0717 17:25:38.129031 31817 main.go:141] libmachine: (ha-333994) <disk type='file' device='disk'>
I0717 17:25:38.129043 31817 main.go:141] libmachine: (ha-333994) <driver name='qemu' type='raw' cache='default' io='threads' />
I0717 17:25:38.129057 31817 main.go:141] libmachine: (ha-333994) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/ha-333994.rawdisk'/>
I0717 17:25:38.129067 31817 main.go:141] libmachine: (ha-333994) <target dev='hda' bus='virtio'/>
I0717 17:25:38.129079 31817 main.go:141] libmachine: (ha-333994) </disk>
I0717 17:25:38.129089 31817 main.go:141] libmachine: (ha-333994) <interface type='network'>
I0717 17:25:38.129098 31817 main.go:141] libmachine: (ha-333994) <source network='mk-ha-333994'/>
I0717 17:25:38.129109 31817 main.go:141] libmachine: (ha-333994) <model type='virtio'/>
I0717 17:25:38.129125 31817 main.go:141] libmachine: (ha-333994) </interface>
I0717 17:25:38.129143 31817 main.go:141] libmachine: (ha-333994) <interface type='network'>
I0717 17:25:38.129156 31817 main.go:141] libmachine: (ha-333994) <source network='default'/>
I0717 17:25:38.129166 31817 main.go:141] libmachine: (ha-333994) <model type='virtio'/>
I0717 17:25:38.129177 31817 main.go:141] libmachine: (ha-333994) </interface>
I0717 17:25:38.129185 31817 main.go:141] libmachine: (ha-333994) <serial type='pty'>
I0717 17:25:38.129197 31817 main.go:141] libmachine: (ha-333994) <target port='0'/>
I0717 17:25:38.129212 31817 main.go:141] libmachine: (ha-333994) </serial>
I0717 17:25:38.129237 31817 main.go:141] libmachine: (ha-333994) <console type='pty'>
I0717 17:25:38.129257 31817 main.go:141] libmachine: (ha-333994) <target type='serial' port='0'/>
I0717 17:25:38.129277 31817 main.go:141] libmachine: (ha-333994) </console>
I0717 17:25:38.129288 31817 main.go:141] libmachine: (ha-333994) <rng model='virtio'>
I0717 17:25:38.129301 31817 main.go:141] libmachine: (ha-333994) <backend model='random'>/dev/random</backend>
I0717 17:25:38.129310 31817 main.go:141] libmachine: (ha-333994) </rng>
I0717 17:25:38.129321 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.129333 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.129343 31817 main.go:141] libmachine: (ha-333994) </devices>
I0717 17:25:38.129353 31817 main.go:141] libmachine: (ha-333994) </domain>
I0717 17:25:38.129364 31817 main.go:141] libmachine: (ha-333994)
I0717 17:25:38.133746 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:7d:ea:ab in network default
I0717 17:25:38.134333 31817 main.go:141] libmachine: (ha-333994) Ensuring networks are active...
I0717 17:25:38.134354 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:38.134949 31817 main.go:141] libmachine: (ha-333994) Ensuring network default is active
I0717 17:25:38.135204 31817 main.go:141] libmachine: (ha-333994) Ensuring network mk-ha-333994 is active
I0717 17:25:38.135633 31817 main.go:141] libmachine: (ha-333994) Getting domain xml...
I0717 17:25:38.136245 31817 main.go:141] libmachine: (ha-333994) Creating domain...
I0717 17:25:39.310815 31817 main.go:141] libmachine: (ha-333994) Waiting to get IP...
I0717 17:25:39.311620 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:39.312037 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:39.312090 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:39.312036 31840 retry.go:31] will retry after 308.80623ms: waiting for machine to come up
I0717 17:25:39.622682 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:39.623065 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:39.623083 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:39.623047 31840 retry.go:31] will retry after 344.848861ms: waiting for machine to come up
I0717 17:25:39.969533 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:39.969924 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:39.969950 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:39.969868 31840 retry.go:31] will retry after 339.149265ms: waiting for machine to come up
I0717 17:25:40.310470 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:40.310889 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:40.310915 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:40.310855 31840 retry.go:31] will retry after 442.455692ms: waiting for machine to come up
I0717 17:25:40.754326 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:40.754769 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:40.754793 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:40.754727 31840 retry.go:31] will retry after 692.369602ms: waiting for machine to come up
I0717 17:25:41.448430 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:41.448821 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:41.448845 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:41.448784 31840 retry.go:31] will retry after 888.634073ms: waiting for machine to come up
I0717 17:25:42.338562 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:42.338956 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:42.338987 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:42.338917 31840 retry.go:31] will retry after 958.652231ms: waiting for machine to come up
I0717 17:25:43.299646 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:43.300036 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:43.300060 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:43.299996 31840 retry.go:31] will retry after 1.026520774s: waiting for machine to come up
I0717 17:25:44.328045 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:44.328353 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:44.328378 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:44.328319 31840 retry.go:31] will retry after 1.144606861s: waiting for machine to come up
I0717 17:25:45.474485 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:45.474883 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:45.474908 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:45.474852 31840 retry.go:31] will retry after 2.320040547s: waiting for machine to come up
I0717 17:25:47.796771 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:47.797227 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:47.797257 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:47.797189 31840 retry.go:31] will retry after 2.900412309s: waiting for machine to come up
I0717 17:25:50.701258 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:50.701734 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:50.701785 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:50.701700 31840 retry.go:31] will retry after 2.901702791s: waiting for machine to come up
I0717 17:25:53.605129 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:53.605559 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find current IP address of domain ha-333994 in network mk-ha-333994
I0717 17:25:53.605577 31817 main.go:141] libmachine: (ha-333994) DBG | I0717 17:25:53.605522 31840 retry.go:31] will retry after 3.63399522s: waiting for machine to come up
I0717 17:25:57.240563 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.240970 31817 main.go:141] libmachine: (ha-333994) Found IP for machine: 192.168.39.180
I0717 17:25:57.241006 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has current primary IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.241016 31817 main.go:141] libmachine: (ha-333994) Reserving static IP address...
I0717 17:25:57.241422 31817 main.go:141] libmachine: (ha-333994) DBG | unable to find host DHCP lease matching {name: "ha-333994", mac: "52:54:00:73:4b:68", ip: "192.168.39.180"} in network mk-ha-333994
I0717 17:25:57.311172 31817 main.go:141] libmachine: (ha-333994) DBG | Getting to WaitForSSH function...
I0717 17:25:57.311209 31817 main.go:141] libmachine: (ha-333994) Reserved static IP address: 192.168.39.180
I0717 17:25:57.311222 31817 main.go:141] libmachine: (ha-333994) Waiting for SSH to be available...
I0717 17:25:57.313438 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.313869 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.313914 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.313935 31817 main.go:141] libmachine: (ha-333994) DBG | Using SSH client type: external
I0717 17:25:57.313972 31817 main.go:141] libmachine: (ha-333994) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa (-rw-------)
I0717 17:25:57.314013 31817 main.go:141] libmachine: (ha-333994) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa -p 22] /usr/bin/ssh <nil>}
I0717 17:25:57.314051 31817 main.go:141] libmachine: (ha-333994) DBG | About to run SSH command:
I0717 17:25:57.314064 31817 main.go:141] libmachine: (ha-333994) DBG | exit 0
I0717 17:25:57.442005 31817 main.go:141] libmachine: (ha-333994) DBG | SSH cmd err, output: <nil>:
I0717 17:25:57.442249 31817 main.go:141] libmachine: (ha-333994) KVM machine creation complete!
I0717 17:25:57.442580 31817 main.go:141] libmachine: (ha-333994) Calling .GetConfigRaw
I0717 17:25:57.443082 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:57.443285 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:57.443431 31817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0717 17:25:57.443445 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:25:57.444683 31817 main.go:141] libmachine: Detecting operating system of created instance...
I0717 17:25:57.444702 31817 main.go:141] libmachine: Waiting for SSH to be available...
I0717 17:25:57.444710 31817 main.go:141] libmachine: Getting to WaitForSSH function...
I0717 17:25:57.444718 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.446779 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.447118 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.447145 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.447285 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.447420 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.447569 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.447686 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.447850 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.448075 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.448086 31817 main.go:141] libmachine: About to run SSH command:
exit 0
I0717 17:25:57.561413 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 17:25:57.561435 31817 main.go:141] libmachine: Detecting the provisioner...
I0717 17:25:57.561444 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.564006 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.564331 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.564353 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.564530 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.564739 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.564886 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.565046 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.565213 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.565388 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.565402 31817 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0717 17:25:57.678978 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0717 17:25:57.679062 31817 main.go:141] libmachine: found compatible host: buildroot
I0717 17:25:57.679075 31817 main.go:141] libmachine: Provisioning with buildroot...
I0717 17:25:57.679085 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:57.679397 31817 buildroot.go:166] provisioning hostname "ha-333994"
I0717 17:25:57.679418 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:57.679587 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.682101 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.682468 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.682497 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.682625 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.682902 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.683088 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.683236 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.683384 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.683567 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.683582 31817 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-333994 && echo "ha-333994" | sudo tee /etc/hostname
I0717 17:25:57.808613 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-333994
I0717 17:25:57.808643 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.811150 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.811462 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.811484 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.811633 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:57.811819 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.811975 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:57.812114 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:57.812259 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:57.812470 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:57.812492 31817 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-333994' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-333994/g' /etc/hosts;
else
echo '127.0.1.1 ha-333994' | sudo tee -a /etc/hosts;
fi
fi
I0717 17:25:57.935982 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
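The `/etc/hosts` rewrite run over SSH above is idempotent: replace an existing `127.0.1.1` entry, else append one, and do nothing if the hostname is already present. A minimal standalone sketch of the same logic, operating on a scratch file rather than the real `/etc/hosts` (file path and seed contents here are illustrative):

```shell
#!/bin/sh
# Sketch of minikube's idempotent hostname-entry update, against a scratch file.
HOSTS=./hosts.sample          # stand-in for /etc/hosts
NAME=ha-333994

printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if no line already ends with the hostname.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Replace the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # Otherwise append a fresh entry.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the script a second time leaves the file unchanged, which is why the provisioner can safely re-run it on every start.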
I0717 17:25:57.936010 31817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14409/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14409/.minikube}
I0717 17:25:57.936045 31817 buildroot.go:174] setting up certificates
I0717 17:25:57.936053 31817 provision.go:84] configureAuth start
I0717 17:25:57.936064 31817 main.go:141] libmachine: (ha-333994) Calling .GetMachineName
I0717 17:25:57.936323 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:25:57.938795 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.939097 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.939122 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.939256 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:57.941132 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.941439 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:57.941465 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:57.941555 31817 provision.go:143] copyHostCerts
I0717 17:25:57.941591 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:25:57.941628 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem, removing ...
I0717 17:25:57.941644 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:25:57.941723 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem (1082 bytes)
I0717 17:25:57.941842 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:25:57.941865 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem, removing ...
I0717 17:25:57.941872 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:25:57.941911 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem (1123 bytes)
I0717 17:25:57.941974 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:25:57.942004 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem, removing ...
I0717 17:25:57.942014 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:25:57.942052 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem (1679 bytes)
I0717 17:25:57.942132 31817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem org=jenkins.ha-333994 san=[127.0.0.1 192.168.39.180 ha-333994 localhost minikube]
I0717 17:25:58.111694 31817 provision.go:177] copyRemoteCerts
I0717 17:25:58.111759 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 17:25:58.111785 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.114260 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.114541 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.114565 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.114746 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.114900 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.115022 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.115159 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.204834 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 17:25:58.204915 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0717 17:25:58.233451 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 17:25:58.233504 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0717 17:25:58.260715 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 17:25:58.260793 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 17:25:58.288074 31817 provision.go:87] duration metric: took 352.00837ms to configureAuth
I0717 17:25:58.288100 31817 buildroot.go:189] setting minikube options for container-runtime
I0717 17:25:58.288281 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:25:58.288301 31817 main.go:141] libmachine: Checking connection to Docker...
I0717 17:25:58.288311 31817 main.go:141] libmachine: (ha-333994) Calling .GetURL
I0717 17:25:58.289444 31817 main.go:141] libmachine: (ha-333994) DBG | Using libvirt version 6000000
I0717 17:25:58.291569 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.291932 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.291957 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.292117 31817 main.go:141] libmachine: Docker is up and running!
I0717 17:25:58.292130 31817 main.go:141] libmachine: Reticulating splines...
I0717 17:25:58.292136 31817 client.go:171] duration metric: took 20.833465773s to LocalClient.Create
I0717 17:25:58.292154 31817 start.go:167] duration metric: took 20.833518022s to libmachine.API.Create "ha-333994"
I0717 17:25:58.292162 31817 start.go:293] postStartSetup for "ha-333994" (driver="kvm2")
I0717 17:25:58.292170 31817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 17:25:58.292186 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.292380 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 17:25:58.292412 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.294705 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.294988 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.295011 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.295156 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.295308 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.295448 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.295547 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.380876 31817 ssh_runner.go:195] Run: cat /etc/os-release
I0717 17:25:58.385479 31817 info.go:137] Remote host: Buildroot 2023.02.9
I0717 17:25:58.385504 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/addons for local assets ...
I0717 17:25:58.385563 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/files for local assets ...
I0717 17:25:58.385657 31817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> 216612.pem in /etc/ssl/certs
I0717 17:25:58.385670 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /etc/ssl/certs/216612.pem
I0717 17:25:58.385792 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 17:25:58.395135 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:25:58.422415 31817 start.go:296] duration metric: took 130.238563ms for postStartSetup
I0717 17:25:58.422468 31817 main.go:141] libmachine: (ha-333994) Calling .GetConfigRaw
I0717 17:25:58.423096 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:25:58.425440 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.425742 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.425767 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.426007 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:25:58.426198 31817 start.go:128] duration metric: took 20.984856664s to createHost
I0717 17:25:58.426221 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.428248 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.428511 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.428538 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.428637 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.428826 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.428930 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.429005 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.429097 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:25:58.429257 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.180 22 <nil> <nil>}
I0717 17:25:58.429266 31817 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0717 17:25:58.543836 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237158.504657493
I0717 17:25:58.543858 31817 fix.go:216] guest clock: 1721237158.504657493
I0717 17:25:58.543867 31817 fix.go:229] Guest: 2024-07-17 17:25:58.504657493 +0000 UTC Remote: 2024-07-17 17:25:58.426211523 +0000 UTC m=+21.086147695 (delta=78.44597ms)
I0717 17:25:58.543886 31817 fix.go:200] guest clock delta is within tolerance: 78.44597ms
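The clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the ~78ms drift as within tolerance. A sketch of that delta computation, using the two timestamps from the log (the 2-second tolerance here is illustrative, not minikube's actual threshold):

```shell
#!/bin/sh
# Compare two fractional-second timestamps and test a drift tolerance,
# as minikube does between guest and host clocks. Timestamps are the
# values from the log above; the tolerance value is a stand-in.
GUEST=1721237158.504657493
HOST=1721237158.426211523
TOLERANCE=2.0

awk -v g="$GUEST" -v h="$HOST" -v t="$TOLERANCE" 'BEGIN {
  d = g - h; if (d < 0) d = -d;                      # absolute drift in seconds
  printf "delta=%.6fs within=%s\n", d, (d <= t ? "yes" : "no");
  exit (d <= t ? 0 : 1)                              # nonzero exit on excess drift
}'
# → delta=0.078446s within=yes
```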
I0717 17:25:58.543891 31817 start.go:83] releasing machines lock for "ha-333994", held for 21.102620399s
I0717 17:25:58.543907 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.544173 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:25:58.546693 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.547047 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.547072 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.547197 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.547654 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.547823 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:25:58.547916 31817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 17:25:58.547962 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.548054 31817 ssh_runner.go:195] Run: cat /version.json
I0717 17:25:58.548080 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:25:58.550378 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.550648 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.550679 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.550978 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.550982 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.551129 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.551187 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:25:58.551227 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.551240 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:25:58.551305 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:25:58.551318 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.551480 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:25:58.551686 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:25:58.552927 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:25:58.654133 31817 ssh_runner.go:195] Run: systemctl --version
I0717 17:25:58.660072 31817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0717 17:25:58.665532 31817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 17:25:58.665586 31817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 17:25:58.682884 31817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 17:25:58.682906 31817 start.go:495] detecting cgroup driver to use...
I0717 17:25:58.682966 31817 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 17:25:58.710921 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 17:25:58.724815 31817 docker.go:217] disabling cri-docker service (if available) ...
I0717 17:25:58.724862 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 17:25:58.738870 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 17:25:58.752912 31817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 17:25:58.873905 31817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 17:25:59.009226 31817 docker.go:233] disabling docker service ...
I0717 17:25:59.009286 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 17:25:59.024317 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 17:25:59.037729 31817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 17:25:59.178928 31817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 17:25:59.308950 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 17:25:59.322702 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 17:25:59.341915 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 17:25:59.352890 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 17:25:59.363450 31817 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 17:25:59.363513 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 17:25:59.374006 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:25:59.384984 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 17:25:59.395933 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:25:59.406370 31817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 17:25:59.416834 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 17:25:59.427824 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 17:25:59.438419 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 17:25:59.448933 31817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 17:25:59.458271 31817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0717 17:25:59.458321 31817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0717 17:25:59.471288 31817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 17:25:59.480733 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:25:59.597561 31817 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 17:25:59.625448 31817 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 17:25:59.625540 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:25:59.630090 31817 retry.go:31] will retry after 1.114753424s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
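The 60-second wait above is a plain stat-and-retry poll on the containerd socket path. A self-contained sketch of the same pattern, polling for an ordinary file created in the background instead of a real socket (path, deadline, and interval are illustrative):

```shell
#!/bin/sh
# Poll for a path to appear, retrying with a short delay, in the spirit of
# minikube's 60s wait for /run/containerd/containerd.sock.
TARGET=./containerd.sock.stub
DEADLINE=10          # seconds to wait overall
INTERVAL=1           # seconds between stat attempts

rm -f "$TARGET"
( sleep 2; touch "$TARGET" ) &   # simulate containerd creating its socket

elapsed=0
until stat "$TARGET" >/dev/null 2>&1; do
  if [ "$elapsed" -ge "$DEADLINE" ]; then
    echo "timed out waiting for $TARGET" >&2
    exit 1
  fi
  sleep "$INTERVAL"
  elapsed=$((elapsed + INTERVAL))
done
echo "found $TARGET after ~${elapsed}s"
```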
I0717 17:26:00.745398 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:26:00.750563 31817 start.go:563] Will wait 60s for crictl version
I0717 17:26:00.750619 31817 ssh_runner.go:195] Run: which crictl
I0717 17:26:00.754270 31817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 17:26:00.794015 31817 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.19
RuntimeApiVersion: v1
I0717 17:26:00.794075 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:00.821370 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:00.850476 31817 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.7.19 ...
I0717 17:26:00.851699 31817 main.go:141] libmachine: (ha-333994) Calling .GetIP
I0717 17:26:00.854267 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:00.854598 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:00.854625 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:00.854810 31817 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0717 17:26:00.858914 31817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
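The one-liner above drops any stale `host.minikube.internal` line and re-appends the current gateway IP. The same replace-then-append pattern on a scratch file (path and seed contents are illustrative; the IP is the gateway from the log):

```shell
#!/bin/sh
# Replace-or-append a host entry via a temp file, as in minikube's
# host.minikube.internal update (scratch file instead of /etc/hosts).
HOSTS=./hosts.gateway
IP=192.168.39.1

printf '127.0.0.1 localhost\n192.168.39.99 host.minikube.internal\n' > "$HOSTS"

# Strip any existing entry for the name, then append the fresh one.
{ grep -v "[[:space:]]host\.minikube\.internal\$" "$HOSTS"; \
  echo "$IP host.minikube.internal"; } > "$HOSTS.tmp"
mv "$HOSTS.tmp" "$HOSTS"
cat "$HOSTS"
```

Writing to a temp file and `mv`-ing it into place keeps the file whole even if the filter step fails partway.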
I0717 17:26:00.872028 31817 kubeadm.go:883] updating cluster {Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 17:26:00.872129 31817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 17:26:00.872173 31817 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:26:00.904349 31817 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
I0717 17:26:00.904418 31817 ssh_runner.go:195] Run: which lz4
I0717 17:26:00.908264 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0717 17:26:00.908363 31817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0717 17:26:00.912476 31817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0717 17:26:00.912508 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (394473408 bytes)
I0717 17:26:02.292043 31817 containerd.go:563] duration metric: took 1.383715694s to copy over tarball
I0717 17:26:02.292124 31817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0717 17:26:04.380435 31817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.088281526s)
I0717 17:26:04.380473 31817 containerd.go:570] duration metric: took 2.088397847s to extract the tarball
I0717 17:26:04.380483 31817 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0717 17:26:04.417289 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:26:04.532503 31817 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 17:26:04.562019 31817 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:26:04.594139 31817 retry.go:31] will retry after 159.715137ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2024-07-17T17:26:04Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
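After `systemctl restart containerd` the CRI socket can briefly be absent, which is why the first `crictl images` call above fails with "no such file or directory" and is retried ~160ms later. A minimal sketch of that wait-for-socket pattern (function name, timeout, and polling interval are illustrative, not minikube's actual implementation):

```shell
# Poll for a unix socket to appear, with a bounded number of attempts.
# Returns 0 once the socket exists, 1 if it never shows up.
wait_for_socket() {
  local sock="$1" tries="${2:-20}"
  for _ in $(seq "$tries"); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.2
  done
  return 1
}
```

Usage would look like `wait_for_socket /run/containerd/containerd.sock && crictl images --output json`.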
I0717 17:26:04.754516 31817 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:26:04.790521 31817 containerd.go:627] all images are preloaded for containerd runtime.
I0717 17:26:04.790541 31817 cache_images.go:84] Images are preloaded, skipping loading
I0717 17:26:04.790548 31817 kubeadm.go:934] updating node { 192.168.39.180 8443 v1.30.2 containerd true true} ...
I0717 17:26:04.790647 31817 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-333994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 17:26:04.790702 31817 ssh_runner.go:195] Run: sudo crictl info
I0717 17:26:04.826334 31817 cni.go:84] Creating CNI manager for ""
I0717 17:26:04.826357 31817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0717 17:26:04.826364 31817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 17:26:04.826385 31817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-333994 NodeName:ha-333994 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0717 17:26:04.826538 31817 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.180
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "ha-333994"
kubeletExtraArgs:
node-ip: 192.168.39.180
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
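The file written to /var/tmp/minikube/kubeadm.yaml.new above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A quick sanity check on such a file is counting the documents (a rough sketch that assumes `---` lines appear only as document separators):

```shell
# Count YAML documents in a multi-doc stream by counting '---' separator
# lines; a file not starting with '---' has (separators + 1) documents.
count_yaml_docs() {
  awk 'BEGIN{n=1} /^---$/{n++} END{print n}' "$1"
}
```

For the config in this log, `count_yaml_docs /var/tmp/minikube/kubeadm.yaml.new` should report 4.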
I0717 17:26:04.826560 31817 kube-vip.go:115] generating kube-vip config ...
I0717 17:26:04.826608 31817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0717 17:26:04.845088 31817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0717 17:26:04.845186 31817 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/super-admin.conf"
name: kubeconfig
status: {}
I0717 17:26:04.845237 31817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 17:26:04.855420 31817 binaries.go:44] Found k8s binaries, skipping transfer
I0717 17:26:04.855490 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0717 17:26:04.865095 31817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
I0717 17:26:04.882653 31817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 17:26:04.899447 31817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
I0717 17:26:04.917467 31817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
I0717 17:26:04.934831 31817 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0717 17:26:04.938924 31817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
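The bash one-liner above keeps /etc/hosts idempotent: it strips any existing `control-plane.minikube.internal` entry, appends the current VIP mapping, and copies the result back over a temp file. The same pattern, generalized (function name is illustrative; the regex uses any whitespace where minikube's command matches a literal tab):

```shell
# Idempotently pin a hostname to an IP in a hosts-format file:
# drop any prior entry for the host, then append the fresh mapping.
pin_host() {
  local file="$1" ip="$2" host="$3" tmp
  tmp="$(mktemp)"
  { grep -v -E "[[:space:]]${host}\$" "$file" || true
    printf '%s\t%s\n' "$ip" "$host"; } > "$tmp"
  mv "$tmp" "$file"
}
```

Running it twice leaves exactly one entry, which is the property the `grep -v … ; echo …` construction in the log relies on.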
I0717 17:26:04.951512 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:26:05.064475 31817 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 17:26:05.091657 31817 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994 for IP: 192.168.39.180
I0717 17:26:05.091681 31817 certs.go:194] generating shared ca certs ...
I0717 17:26:05.091701 31817 certs.go:226] acquiring lock for ca certs: {Name:mkbd59c659d87951ff3ee355cd9afc07084cc973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.091873 31817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key
I0717 17:26:05.091927 31817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key
I0717 17:26:05.091942 31817 certs.go:256] generating profile certs ...
I0717 17:26:05.092017 31817 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key
I0717 17:26:05.092036 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt with IP's: []
I0717 17:26:05.333847 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt ...
I0717 17:26:05.333874 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt: {Name:mk777cbb40105a68e3f77323fe294b684956fe92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.334027 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key ...
I0717 17:26:05.334037 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key: {Name:mk5d028eb3d5165101367caeb298d78e1ef97418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.334107 31817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e
I0717 17:26:05.334145 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.180 192.168.39.254]
I0717 17:26:05.424786 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e ...
I0717 17:26:05.424814 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e: {Name:mk0136c8aa6e3dcb0178d33e23c8a472c3572950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.424956 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e ...
I0717 17:26:05.424968 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e: {Name:mk21a2bd5914e6b9398865902ece829e628c40ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.425035 31817 certs.go:381] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.7fec389e -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt
I0717 17:26:05.425116 31817 certs.go:385] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.7fec389e -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key
I0717 17:26:05.425167 31817 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key
I0717 17:26:05.425180 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt with IP's: []
I0717 17:26:05.709359 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt ...
I0717 17:26:05.709387 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt: {Name:mk00da479f15831c3fb1174ab8fe01112b152616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.709526 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key ...
I0717 17:26:05.709536 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key: {Name:mk48280e7c358eaec39922f30f6427d18e40d4e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:05.709599 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0717 17:26:05.709615 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0717 17:26:05.709625 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0717 17:26:05.709637 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0717 17:26:05.709649 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0717 17:26:05.709664 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0717 17:26:05.709675 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0717 17:26:05.709686 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0717 17:26:05.709732 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem (1338 bytes)
W0717 17:26:05.709772 31817 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661_empty.pem, impossibly tiny 0 bytes
I0717 17:26:05.709781 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem (1679 bytes)
I0717 17:26:05.709804 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem (1082 bytes)
I0717 17:26:05.709828 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem (1123 bytes)
I0717 17:26:05.709854 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem (1679 bytes)
I0717 17:26:05.709889 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:26:05.709937 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /usr/share/ca-certificates/216612.pem
I0717 17:26:05.709953 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:05.709962 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem -> /usr/share/ca-certificates/21661.pem
I0717 17:26:05.710499 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 17:26:05.736286 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0717 17:26:05.762624 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 17:26:05.789813 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 17:26:05.816731 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0717 17:26:05.843922 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0717 17:26:05.890090 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 17:26:05.917641 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 17:26:05.942689 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /usr/share/ca-certificates/216612.pem (1708 bytes)
I0717 17:26:05.968245 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 17:26:05.991503 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem --> /usr/share/ca-certificates/21661.pem (1338 bytes)
I0717 17:26:06.014644 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 17:26:06.030964 31817 ssh_runner.go:195] Run: openssl version
I0717 17:26:06.036668 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216612.pem && ln -fs /usr/share/ca-certificates/216612.pem /etc/ssl/certs/216612.pem"
I0717 17:26:06.047444 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216612.pem
I0717 17:26:06.051872 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:21 /usr/share/ca-certificates/216612.pem
I0717 17:26:06.051933 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216612.pem
I0717 17:26:06.057696 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/216612.pem /etc/ssl/certs/3ec20f2e.0"
I0717 17:26:06.068885 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 17:26:06.079816 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:06.084516 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:13 /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:06.084582 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:06.090194 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 17:26:06.100911 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21661.pem && ln -fs /usr/share/ca-certificates/21661.pem /etc/ssl/certs/21661.pem"
I0717 17:26:06.112203 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21661.pem
I0717 17:26:06.116753 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:21 /usr/share/ca-certificates/21661.pem
I0717 17:26:06.116812 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21661.pem
I0717 17:26:06.122686 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21661.pem /etc/ssl/certs/51391683.0"
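The `openssl x509 -hash` / `ln -fs` pairs above exist because OpenSSL locates trusted CAs in /etc/ssl/certs via subject-hash-named symlinks (e.g. b5213941.0 for minikubeCA.pem). A compact sketch of the two steps combined (function name is illustrative):

```shell
# Link a CA certificate into a trust directory under its OpenSSL
# subject hash, mirroring the hash + symlink steps in this log.
install_ca_link() {
  local cert="$1" dir="$2" hash
  hash="$(openssl x509 -hash -noout -in "$cert")"
  ln -fs "$cert" "${dir}/${hash}.0"
}
```

The `.0` suffix disambiguates distinct certificates that happen to share a subject hash; minikube uses `.0` since collisions are not expected here.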
I0717 17:26:06.133462 31817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 17:26:06.137718 31817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0717 17:26:06.137774 31817 kubeadm.go:392] StartCluster: {Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 17:26:06.137852 31817 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0717 17:26:06.137906 31817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0717 17:26:06.181182 31817 cri.go:89] found id: ""
I0717 17:26:06.181252 31817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 17:26:06.191588 31817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0717 17:26:06.201776 31817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0717 17:26:06.211610 31817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0717 17:26:06.211628 31817 kubeadm.go:157] found existing configuration files:
I0717 17:26:06.211668 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0717 17:26:06.221376 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0717 17:26:06.221428 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0717 17:26:06.231162 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0717 17:26:06.240465 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0717 17:26:06.240520 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0717 17:26:06.250464 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0717 17:26:06.260016 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0717 17:26:06.260071 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0717 17:26:06.269931 31817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0717 17:26:06.279357 31817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0717 17:26:06.279423 31817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0717 17:26:06.289124 31817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0717 17:26:06.540765 31817 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0717 17:26:16.854837 31817 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
I0717 17:26:16.854895 31817 kubeadm.go:310] [preflight] Running pre-flight checks
I0717 17:26:16.854996 31817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0717 17:26:16.855136 31817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0717 17:26:16.855227 31817 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0717 17:26:16.855281 31817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0717 17:26:16.856908 31817 out.go:204] - Generating certificates and keys ...
I0717 17:26:16.856974 31817 kubeadm.go:310] [certs] Using existing ca certificate authority
I0717 17:26:16.857030 31817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0717 17:26:16.857098 31817 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0717 17:26:16.857147 31817 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0717 17:26:16.857206 31817 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0717 17:26:16.857246 31817 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0717 17:26:16.857299 31817 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0717 17:26:16.857447 31817 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-333994 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
I0717 17:26:16.857539 31817 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0717 17:26:16.857713 31817 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-333994 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
I0717 17:26:16.857815 31817 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0717 17:26:16.857909 31817 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0717 17:26:16.857973 31817 kubeadm.go:310] [certs] Generating "sa" key and public key
I0717 17:26:16.858063 31817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0717 17:26:16.858158 31817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0717 17:26:16.858237 31817 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0717 17:26:16.858285 31817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0717 17:26:16.858338 31817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0717 17:26:16.858384 31817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0717 17:26:16.858464 31817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0717 17:26:16.858535 31817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0717 17:26:16.860941 31817 out.go:204] - Booting up control plane ...
I0717 17:26:16.861023 31817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0717 17:26:16.861114 31817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0717 17:26:16.861201 31817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0717 17:26:16.861312 31817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0717 17:26:16.861419 31817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0717 17:26:16.861463 31817 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0717 17:26:16.861573 31817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0717 17:26:16.861661 31817 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0717 17:26:16.861750 31817 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.96481ms
I0717 17:26:16.861834 31817 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0717 17:26:16.861884 31817 kubeadm.go:310] [api-check] The API server is healthy after 5.974489427s
I0717 17:26:16.862127 31817 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0717 17:26:16.862266 31817 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0717 17:26:16.862320 31817 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0717 17:26:16.862517 31817 kubeadm.go:310] [mark-control-plane] Marking the node ha-333994 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0717 17:26:16.862583 31817 kubeadm.go:310] [bootstrap-token] Using token: nha8at.aampri4d84mofmvm
I0717 17:26:16.863863 31817 out.go:204] - Configuring RBAC rules ...
I0717 17:26:16.863958 31817 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0717 17:26:16.864053 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0717 17:26:16.864187 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0717 17:26:16.864354 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I0717 17:26:16.864468 31817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0717 17:26:16.864606 31817 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0717 17:26:16.864779 31817 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0717 17:26:16.864819 31817 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0717 17:26:16.864861 31817 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0717 17:26:16.864867 31817 kubeadm.go:310]
I0717 17:26:16.864915 31817 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0717 17:26:16.864921 31817 kubeadm.go:310]
I0717 17:26:16.864989 31817 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0717 17:26:16.865003 31817 kubeadm.go:310]
I0717 17:26:16.865036 31817 kubeadm.go:310] mkdir -p $HOME/.kube
I0717 17:26:16.865087 31817 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0717 17:26:16.865148 31817 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0717 17:26:16.865158 31817 kubeadm.go:310]
I0717 17:26:16.865241 31817 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0717 17:26:16.865256 31817 kubeadm.go:310]
I0717 17:26:16.865326 31817 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0717 17:26:16.865337 31817 kubeadm.go:310]
I0717 17:26:16.865412 31817 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0717 17:26:16.865511 31817 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0717 17:26:16.865586 31817 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0717 17:26:16.865592 31817 kubeadm.go:310]
I0717 17:26:16.865681 31817 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0717 17:26:16.865783 31817 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0717 17:26:16.865794 31817 kubeadm.go:310]
I0717 17:26:16.865910 31817 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nha8at.aampri4d84mofmvm \
I0717 17:26:16.866069 31817 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a60e42bdf4c234276b18cf44d8d4bb8b184659f5dc63b21861fc880bef0ea484 \
I0717 17:26:16.866105 31817 kubeadm.go:310] --control-plane
I0717 17:26:16.866127 31817 kubeadm.go:310]
I0717 17:26:16.866222 31817 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0717 17:26:16.866229 31817 kubeadm.go:310]
I0717 17:26:16.866315 31817 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nha8at.aampri4d84mofmvm \
I0717 17:26:16.866474 31817 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a60e42bdf4c234276b18cf44d8d4bb8b184659f5dc63b21861fc880bef0ea484
I0717 17:26:16.866487 31817 cni.go:84] Creating CNI manager for ""
I0717 17:26:16.866496 31817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0717 17:26:16.867885 31817 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0717 17:26:16.868963 31817 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0717 17:26:16.874562 31817 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
I0717 17:26:16.874582 31817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0717 17:26:16.893967 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0717 17:26:17.240919 31817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0717 17:26:17.241000 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:17.241050 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-333994 minikube.k8s.io/updated_at=2024_07_17T17_26_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-333994 minikube.k8s.io/primary=true
I0717 17:26:17.265880 31817 ops.go:34] apiserver oom_adj: -16
I0717 17:26:17.373587 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:17.874354 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:18.374127 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:18.874198 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:19.374489 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:19.874572 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:20.373924 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:20.874355 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:21.373893 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:21.874071 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:22.374000 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:22.873730 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:23.374382 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:23.874233 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:24.374181 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:24.874599 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:25.374533 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:25.874592 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:26.373806 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:26.874333 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:27.373913 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:27.874327 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:28.373877 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:28.873887 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:29.374632 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:29.874052 31817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 17:26:30.024970 31817 kubeadm.go:1113] duration metric: took 12.784009766s to wait for elevateKubeSystemPrivileges
I0717 17:26:30.025013 31817 kubeadm.go:394] duration metric: took 23.887240562s to StartCluster
I0717 17:26:30.025031 31817 settings.go:142] acquiring lock: {Name:mk91c7387a23a84a0d90c1f4a8be889afd5f8e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:30.025112 31817 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19283-14409/kubeconfig
I0717 17:26:30.026088 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/kubeconfig: {Name:mkcf3eba146eb28d296552e24aa3055bdbdcc231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:30.026357 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0717 17:26:30.026385 31817 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 17:26:30.026411 31817 start.go:241] waiting for startup goroutines ...
I0717 17:26:30.026428 31817 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0717 17:26:30.026497 31817 addons.go:69] Setting storage-provisioner=true in profile "ha-333994"
I0717 17:26:30.026512 31817 addons.go:69] Setting default-storageclass=true in profile "ha-333994"
I0717 17:26:30.026541 31817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-333994"
I0717 17:26:30.026571 31817 addons.go:234] Setting addon storage-provisioner=true in "ha-333994"
I0717 17:26:30.026609 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:30.026621 31817 host.go:66] Checking if "ha-333994" exists ...
I0717 17:26:30.026938 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.026980 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.026991 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.027043 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.041651 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
I0717 17:26:30.042154 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
I0717 17:26:30.042786 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.043559 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.043586 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.043583 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.044032 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.044132 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.044154 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.044459 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.044627 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:30.045452 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.045489 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.046872 31817 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/19283-14409/kubeconfig
I0717 17:26:30.047164 31817 kapi.go:59] client config for ha-333994: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0717 17:26:30.047615 31817 cert_rotation.go:137] Starting client certificate rotation controller
I0717 17:26:30.047786 31817 addons.go:234] Setting addon default-storageclass=true in "ha-333994"
I0717 17:26:30.047815 31817 host.go:66] Checking if "ha-333994" exists ...
I0717 17:26:30.048048 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.048070 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.062004 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
I0717 17:26:30.062451 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.062948 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.062973 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.063274 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.063821 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:30.063852 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:30.064986 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
I0717 17:26:30.065414 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.066072 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.066093 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.066486 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.066685 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:30.068400 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:26:30.070565 31817 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0717 17:26:30.072061 31817 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0717 17:26:30.072111 31817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0717 17:26:30.072172 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:26:30.075414 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.075887 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:30.075945 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.076100 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:26:30.076283 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:26:30.076404 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:26:30.076550 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:26:30.080633 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
I0717 17:26:30.081042 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:30.081529 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:30.081553 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:30.081832 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:30.082004 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:30.083501 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:26:30.083712 31817 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0717 17:26:30.083728 31817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0717 17:26:30.083744 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:26:30.086186 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.086587 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:30.086610 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:30.086776 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:26:30.086954 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:26:30.087117 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:26:30.087256 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:26:30.228292 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0717 17:26:30.301671 31817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 17:26:30.365207 31817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0717 17:26:30.867357 31817 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0717 17:26:30.994695 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.994720 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.994814 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.994839 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.995019 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995032 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995042 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.995049 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.995083 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995094 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995102 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:30.995109 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:30.995113 31817 main.go:141] libmachine: (ha-333994) DBG | Closing plugin on server side
I0717 17:26:30.995338 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995354 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995425 31817 main.go:141] libmachine: (ha-333994) DBG | Closing plugin on server side
I0717 17:26:30.995442 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:30.995454 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:30.995583 31817 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
I0717 17:26:30.995597 31817 round_trippers.go:469] Request Headers:
I0717 17:26:30.995607 31817 round_trippers.go:473] Accept: application/json, */*
I0717 17:26:30.995615 31817 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0717 17:26:31.008616 31817 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
I0717 17:26:31.009189 31817 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
I0717 17:26:31.009203 31817 round_trippers.go:469] Request Headers:
I0717 17:26:31.009211 31817 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0717 17:26:31.009218 31817 round_trippers.go:473] Accept: application/json, */*
I0717 17:26:31.009222 31817 round_trippers.go:473] Content-Type: application/json
I0717 17:26:31.018362 31817 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0717 17:26:31.018530 31817 main.go:141] libmachine: Making call to close driver server
I0717 17:26:31.018542 31817 main.go:141] libmachine: (ha-333994) Calling .Close
I0717 17:26:31.018820 31817 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:26:31.018857 31817 main.go:141] libmachine: (ha-333994) DBG | Closing plugin on server side
I0717 17:26:31.018879 31817 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:26:31.020620 31817 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0717 17:26:31.022095 31817 addons.go:510] duration metric: took 995.669545ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0717 17:26:31.022154 31817 start.go:246] waiting for cluster config update ...
I0717 17:26:31.022168 31817 start.go:255] writing updated cluster config ...
I0717 17:26:31.023733 31817 out.go:177]
I0717 17:26:31.025261 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:31.025354 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:26:31.027151 31817 out.go:177] * Starting "ha-333994-m02" control-plane node in "ha-333994" cluster
I0717 17:26:31.028468 31817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 17:26:31.028493 31817 cache.go:56] Caching tarball of preloaded images
I0717 17:26:31.028581 31817 preload.go:172] Found /home/jenkins/minikube-integration/19283-14409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 17:26:31.028597 31817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on containerd
I0717 17:26:31.028681 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:26:31.028874 31817 start.go:360] acquireMachinesLock for ha-333994-m02: {Name:mk0f74b853b0d6e269bf0c6a25c6edeb4f1994c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 17:26:31.028940 31817 start.go:364] duration metric: took 41.632µs to acquireMachinesLock for "ha-333994-m02"
I0717 17:26:31.028968 31817 start.go:93] Provisioning new machine with config: &{Name:ha-333994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 17:26:31.029076 31817 start.go:125] createHost starting for "m02" (driver="kvm2")
I0717 17:26:31.030724 31817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0717 17:26:31.030825 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:31.030857 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:31.044970 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
I0717 17:26:31.045405 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:31.045822 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:31.045844 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:31.046177 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:31.046354 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:31.046509 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:31.046649 31817 start.go:159] libmachine.API.Create for "ha-333994" (driver="kvm2")
I0717 17:26:31.046672 31817 client.go:168] LocalClient.Create starting
I0717 17:26:31.046708 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem
I0717 17:26:31.046743 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:26:31.046763 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:26:31.046824 31817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem
I0717 17:26:31.046847 31817 main.go:141] libmachine: Decoding PEM data...
I0717 17:26:31.046863 31817 main.go:141] libmachine: Parsing certificate...
I0717 17:26:31.046888 31817 main.go:141] libmachine: Running pre-create checks...
I0717 17:26:31.046900 31817 main.go:141] libmachine: (ha-333994-m02) Calling .PreCreateCheck
I0717 17:26:31.047078 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetConfigRaw
I0717 17:26:31.047493 31817 main.go:141] libmachine: Creating machine...
I0717 17:26:31.047506 31817 main.go:141] libmachine: (ha-333994-m02) Calling .Create
I0717 17:26:31.047622 31817 main.go:141] libmachine: (ha-333994-m02) Creating KVM machine...
I0717 17:26:31.048765 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found existing default KVM network
I0717 17:26:31.048898 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found existing private KVM network mk-ha-333994
I0717 17:26:31.048996 31817 main.go:141] libmachine: (ha-333994-m02) Setting up store path in /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02 ...
I0717 17:26:31.049023 31817 main.go:141] libmachine: (ha-333994-m02) Building disk image from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
I0717 17:26:31.049102 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.048983 32198 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:26:31.049157 31817 main.go:141] libmachine: (ha-333994-m02) Downloading /home/jenkins/minikube-integration/19283-14409/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14409/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
I0717 17:26:31.264550 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.264392 32198 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa...
I0717 17:26:31.437178 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.437075 32198 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/ha-333994-m02.rawdisk...
I0717 17:26:31.437206 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Writing magic tar header
I0717 17:26:31.437216 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Writing SSH key tar header
I0717 17:26:31.437287 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:31.437231 32198 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02 ...
I0717 17:26:31.437381 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02
I0717 17:26:31.437404 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube/machines
I0717 17:26:31.437414 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02 (perms=drwx------)
I0717 17:26:31.437427 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube/machines (perms=drwxr-xr-x)
I0717 17:26:31.437434 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409/.minikube (perms=drwxr-xr-x)
I0717 17:26:31.437446 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14409 (perms=drwxrwxr-x)
I0717 17:26:31.437455 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0717 17:26:31.437469 31817 main.go:141] libmachine: (ha-333994-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0717 17:26:31.437487 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409/.minikube
I0717 17:26:31.437496 31817 main.go:141] libmachine: (ha-333994-m02) Creating domain...
I0717 17:26:31.437506 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14409
I0717 17:26:31.437514 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0717 17:26:31.437521 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home/jenkins
I0717 17:26:31.437528 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Checking permissions on dir: /home
I0717 17:26:31.437535 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Skipping /home - not owner
I0717 17:26:31.438521 31817 main.go:141] libmachine: (ha-333994-m02) define libvirt domain using xml:
I0717 17:26:31.438545 31817 main.go:141] libmachine: (ha-333994-m02) <domain type='kvm'>
I0717 17:26:31.438556 31817 main.go:141] libmachine: (ha-333994-m02) <name>ha-333994-m02</name>
I0717 17:26:31.438567 31817 main.go:141] libmachine: (ha-333994-m02) <memory unit='MiB'>2200</memory>
I0717 17:26:31.438579 31817 main.go:141] libmachine: (ha-333994-m02) <vcpu>2</vcpu>
I0717 17:26:31.438584 31817 main.go:141] libmachine: (ha-333994-m02) <features>
I0717 17:26:31.438589 31817 main.go:141] libmachine: (ha-333994-m02) <acpi/>
I0717 17:26:31.438593 31817 main.go:141] libmachine: (ha-333994-m02) <apic/>
I0717 17:26:31.438600 31817 main.go:141] libmachine: (ha-333994-m02) <pae/>
I0717 17:26:31.438604 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.438610 31817 main.go:141] libmachine: (ha-333994-m02) </features>
I0717 17:26:31.438614 31817 main.go:141] libmachine: (ha-333994-m02) <cpu mode='host-passthrough'>
I0717 17:26:31.438621 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.438628 31817 main.go:141] libmachine: (ha-333994-m02) </cpu>
I0717 17:26:31.438640 31817 main.go:141] libmachine: (ha-333994-m02) <os>
I0717 17:26:31.438654 31817 main.go:141] libmachine: (ha-333994-m02) <type>hvm</type>
I0717 17:26:31.438664 31817 main.go:141] libmachine: (ha-333994-m02) <boot dev='cdrom'/>
I0717 17:26:31.438671 31817 main.go:141] libmachine: (ha-333994-m02) <boot dev='hd'/>
I0717 17:26:31.438679 31817 main.go:141] libmachine: (ha-333994-m02) <bootmenu enable='no'/>
I0717 17:26:31.438683 31817 main.go:141] libmachine: (ha-333994-m02) </os>
I0717 17:26:31.438688 31817 main.go:141] libmachine: (ha-333994-m02) <devices>
I0717 17:26:31.438696 31817 main.go:141] libmachine: (ha-333994-m02) <disk type='file' device='cdrom'>
I0717 17:26:31.438705 31817 main.go:141] libmachine: (ha-333994-m02) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/boot2docker.iso'/>
I0717 17:26:31.438717 31817 main.go:141] libmachine: (ha-333994-m02) <target dev='hdc' bus='scsi'/>
I0717 17:26:31.438728 31817 main.go:141] libmachine: (ha-333994-m02) <readonly/>
I0717 17:26:31.438741 31817 main.go:141] libmachine: (ha-333994-m02) </disk>
I0717 17:26:31.438755 31817 main.go:141] libmachine: (ha-333994-m02) <disk type='file' device='disk'>
I0717 17:26:31.438807 31817 main.go:141] libmachine: (ha-333994-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0717 17:26:31.438833 31817 main.go:141] libmachine: (ha-333994-m02) <source file='/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/ha-333994-m02.rawdisk'/>
I0717 17:26:31.438839 31817 main.go:141] libmachine: (ha-333994-m02) <target dev='hda' bus='virtio'/>
I0717 17:26:31.438845 31817 main.go:141] libmachine: (ha-333994-m02) </disk>
I0717 17:26:31.438850 31817 main.go:141] libmachine: (ha-333994-m02) <interface type='network'>
I0717 17:26:31.438856 31817 main.go:141] libmachine: (ha-333994-m02) <source network='mk-ha-333994'/>
I0717 17:26:31.438860 31817 main.go:141] libmachine: (ha-333994-m02) <model type='virtio'/>
I0717 17:26:31.438865 31817 main.go:141] libmachine: (ha-333994-m02) </interface>
I0717 17:26:31.438871 31817 main.go:141] libmachine: (ha-333994-m02) <interface type='network'>
I0717 17:26:31.438883 31817 main.go:141] libmachine: (ha-333994-m02) <source network='default'/>
I0717 17:26:31.438890 31817 main.go:141] libmachine: (ha-333994-m02) <model type='virtio'/>
I0717 17:26:31.438898 31817 main.go:141] libmachine: (ha-333994-m02) </interface>
I0717 17:26:31.438911 31817 main.go:141] libmachine: (ha-333994-m02) <serial type='pty'>
I0717 17:26:31.438923 31817 main.go:141] libmachine: (ha-333994-m02) <target port='0'/>
I0717 17:26:31.438931 31817 main.go:141] libmachine: (ha-333994-m02) </serial>
I0717 17:26:31.438942 31817 main.go:141] libmachine: (ha-333994-m02) <console type='pty'>
I0717 17:26:31.438953 31817 main.go:141] libmachine: (ha-333994-m02) <target type='serial' port='0'/>
I0717 17:26:31.438964 31817 main.go:141] libmachine: (ha-333994-m02) </console>
I0717 17:26:31.438974 31817 main.go:141] libmachine: (ha-333994-m02) <rng model='virtio'>
I0717 17:26:31.438987 31817 main.go:141] libmachine: (ha-333994-m02) <backend model='random'>/dev/random</backend>
I0717 17:26:31.438999 31817 main.go:141] libmachine: (ha-333994-m02) </rng>
I0717 17:26:31.439010 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.439021 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.439030 31817 main.go:141] libmachine: (ha-333994-m02) </devices>
I0717 17:26:31.439039 31817 main.go:141] libmachine: (ha-333994-m02) </domain>
I0717 17:26:31.439049 31817 main.go:141] libmachine: (ha-333994-m02)
I0717 17:26:31.445546 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:e9:27:93 in network default
I0717 17:26:31.446057 31817 main.go:141] libmachine: (ha-333994-m02) Ensuring networks are active...
I0717 17:26:31.446081 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:31.446683 31817 main.go:141] libmachine: (ha-333994-m02) Ensuring network default is active
I0717 17:26:31.446957 31817 main.go:141] libmachine: (ha-333994-m02) Ensuring network mk-ha-333994 is active
I0717 17:26:31.447352 31817 main.go:141] libmachine: (ha-333994-m02) Getting domain xml...
I0717 17:26:31.447953 31817 main.go:141] libmachine: (ha-333994-m02) Creating domain...
I0717 17:26:32.668554 31817 main.go:141] libmachine: (ha-333994-m02) Waiting to get IP...
I0717 17:26:32.669421 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:32.669837 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:32.669869 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:32.669821 32198 retry.go:31] will retry after 265.908605ms: waiting for machine to come up
I0717 17:26:32.937392 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:32.937818 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:32.937841 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:32.937787 32198 retry.go:31] will retry after 263.816332ms: waiting for machine to come up
I0717 17:26:33.203484 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:33.203889 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:33.203915 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:33.203865 32198 retry.go:31] will retry after 370.046003ms: waiting for machine to come up
I0717 17:26:33.575157 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:33.575547 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:33.575577 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:33.575470 32198 retry.go:31] will retry after 487.691796ms: waiting for machine to come up
I0717 17:26:34.065171 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:34.065647 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:34.065668 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:34.065610 32198 retry.go:31] will retry after 737.756145ms: waiting for machine to come up
I0717 17:26:34.804469 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:34.804805 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:34.804833 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:34.804748 32198 retry.go:31] will retry after 716.008929ms: waiting for machine to come up
I0717 17:26:35.522742 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:35.523151 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:35.523175 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:35.523122 32198 retry.go:31] will retry after 1.039877882s: waiting for machine to come up
I0717 17:26:36.564784 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:36.565187 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:36.565236 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:36.565168 32198 retry.go:31] will retry after 946.347249ms: waiting for machine to come up
I0717 17:26:37.513629 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:37.514132 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:37.514159 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:37.514078 32198 retry.go:31] will retry after 1.425543571s: waiting for machine to come up
I0717 17:26:38.941439 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:38.941914 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:38.941941 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:38.941867 32198 retry.go:31] will retry after 2.252250366s: waiting for machine to come up
I0717 17:26:41.195297 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:41.195830 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:41.195853 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:41.195783 32198 retry.go:31] will retry after 2.725572397s: waiting for machine to come up
I0717 17:26:43.922616 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:43.923015 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:43.923039 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:43.922970 32198 retry.go:31] will retry after 3.508475549s: waiting for machine to come up
I0717 17:26:47.432839 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:47.433277 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find current IP address of domain ha-333994-m02 in network mk-ha-333994
I0717 17:26:47.433306 31817 main.go:141] libmachine: (ha-333994-m02) DBG | I0717 17:26:47.433245 32198 retry.go:31] will retry after 3.328040591s: waiting for machine to come up
I0717 17:26:50.765649 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:50.766087 31817 main.go:141] libmachine: (ha-333994-m02) Found IP for machine: 192.168.39.127
I0717 17:26:50.766108 31817 main.go:141] libmachine: (ha-333994-m02) Reserving static IP address...
I0717 17:26:50.766147 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has current primary IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:50.766429 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find host DHCP lease matching {name: "ha-333994-m02", mac: "52:54:00:b1:0f:81", ip: "192.168.39.127"} in network mk-ha-333994
I0717 17:26:50.835843 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Getting to WaitForSSH function...
I0717 17:26:50.835875 31817 main.go:141] libmachine: (ha-333994-m02) Reserved static IP address: 192.168.39.127
I0717 17:26:50.835890 31817 main.go:141] libmachine: (ha-333994-m02) Waiting for SSH to be available...
I0717 17:26:50.838442 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:50.838833 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994
I0717 17:26:50.838858 31817 main.go:141] libmachine: (ha-333994-m02) DBG | unable to find defined IP address of network mk-ha-333994 interface with MAC address 52:54:00:b1:0f:81
I0717 17:26:50.839017 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH client type: external
I0717 17:26:50.839052 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa (-rw-------)
I0717 17:26:50.839081 31817 main.go:141] libmachine: (ha-333994-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0717 17:26:50.839104 31817 main.go:141] libmachine: (ha-333994-m02) DBG | About to run SSH command:
I0717 17:26:50.839121 31817 main.go:141] libmachine: (ha-333994-m02) DBG | exit 0
I0717 17:26:50.842964 31817 main.go:141] libmachine: (ha-333994-m02) DBG | SSH cmd err, output: exit status 255:
I0717 17:26:50.842984 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0717 17:26:50.842995 31817 main.go:141] libmachine: (ha-333994-m02) DBG | command : exit 0
I0717 17:26:50.843004 31817 main.go:141] libmachine: (ha-333994-m02) DBG | err : exit status 255
I0717 17:26:50.843028 31817 main.go:141] libmachine: (ha-333994-m02) DBG | output :
I0717 17:26:53.843162 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Getting to WaitForSSH function...
I0717 17:26:53.845524 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.845912 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:53.845964 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.846160 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH client type: external
I0717 17:26:53.846190 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa (-rw-------)
I0717 17:26:53.846218 31817 main.go:141] libmachine: (ha-333994-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0717 17:26:53.846237 31817 main.go:141] libmachine: (ha-333994-m02) DBG | About to run SSH command:
I0717 17:26:53.846249 31817 main.go:141] libmachine: (ha-333994-m02) DBG | exit 0
I0717 17:26:53.977891 31817 main.go:141] libmachine: (ha-333994-m02) DBG | SSH cmd err, output: <nil>:
I0717 17:26:53.978192 31817 main.go:141] libmachine: (ha-333994-m02) KVM machine creation complete!
I0717 17:26:53.978493 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetConfigRaw
I0717 17:26:53.979005 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:53.979196 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:53.979349 31817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0717 17:26:53.979361 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetState
I0717 17:26:53.980446 31817 main.go:141] libmachine: Detecting operating system of created instance...
I0717 17:26:53.980458 31817 main.go:141] libmachine: Waiting for SSH to be available...
I0717 17:26:53.980463 31817 main.go:141] libmachine: Getting to WaitForSSH function...
I0717 17:26:53.980469 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:53.982666 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.983028 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:53.983061 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:53.983193 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:53.983351 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:53.983482 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:53.983592 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:53.983736 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:53.983941 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:53.983953 31817 main.go:141] libmachine: About to run SSH command:
exit 0
I0717 17:26:54.097606 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 17:26:54.097631 31817 main.go:141] libmachine: Detecting the provisioner...
I0717 17:26:54.097638 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.100274 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.100592 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.100626 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.100772 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.100954 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.101115 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.101230 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.101387 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:54.101557 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:54.101569 31817 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0717 17:26:54.214758 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0717 17:26:54.214823 31817 main.go:141] libmachine: found compatible host: buildroot
I0717 17:26:54.214832 31817 main.go:141] libmachine: Provisioning with buildroot...
I0717 17:26:54.214839 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:54.215071 31817 buildroot.go:166] provisioning hostname "ha-333994-m02"
I0717 17:26:54.215095 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:54.215281 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.217709 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.218130 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.218157 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.218274 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.218456 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.218598 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.218743 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.218879 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:54.219074 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:54.219087 31817 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-333994-m02 && echo "ha-333994-m02" | sudo tee /etc/hostname
I0717 17:26:54.348717 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-333994-m02
I0717 17:26:54.348783 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.351584 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.351923 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.351944 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.352126 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.352288 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.352474 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.352599 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.352725 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:54.352881 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:54.352895 31817 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-333994-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-333994-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-333994-m02' | sudo tee -a /etc/hosts;
fi
fi
I0717 17:26:54.476331 31817 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 17:26:54.476371 31817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14409/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14409/.minikube}
I0717 17:26:54.476397 31817 buildroot.go:174] setting up certificates
I0717 17:26:54.476416 31817 provision.go:84] configureAuth start
I0717 17:26:54.476438 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetMachineName
I0717 17:26:54.476719 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:54.479208 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.479564 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.479592 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.479788 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.481800 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.482086 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.482109 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.482263 31817 provision.go:143] copyHostCerts
I0717 17:26:54.482290 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:26:54.482319 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem, removing ...
I0717 17:26:54.482328 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem
I0717 17:26:54.482388 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/ca.pem (1082 bytes)
I0717 17:26:54.482455 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:26:54.482472 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem, removing ...
I0717 17:26:54.482478 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem
I0717 17:26:54.482502 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/cert.pem (1123 bytes)
I0717 17:26:54.482542 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:26:54.482558 31817 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem, removing ...
I0717 17:26:54.482564 31817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem
I0717 17:26:54.482584 31817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14409/.minikube/key.pem (1679 bytes)
I0717 17:26:54.482627 31817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem org=jenkins.ha-333994-m02 san=[127.0.0.1 192.168.39.127 ha-333994-m02 localhost minikube]
I0717 17:26:54.697157 31817 provision.go:177] copyRemoteCerts
I0717 17:26:54.697210 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 17:26:54.697233 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.699959 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.700263 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.700281 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.700480 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.700699 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.700860 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.701000 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
I0717 17:26:54.792678 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 17:26:54.792760 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0717 17:26:54.816985 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 17:26:54.817058 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0717 17:26:54.841268 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 17:26:54.841343 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 17:26:54.865093 31817 provision.go:87] duration metric: took 388.663223ms to configureAuth
I0717 17:26:54.865120 31817 buildroot.go:189] setting minikube options for container-runtime
I0717 17:26:54.865311 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:54.865337 31817 main.go:141] libmachine: Checking connection to Docker...
I0717 17:26:54.865347 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetURL
I0717 17:26:54.866495 31817 main.go:141] libmachine: (ha-333994-m02) DBG | Using libvirt version 6000000
I0717 17:26:54.868417 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.868765 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.868792 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.868933 31817 main.go:141] libmachine: Docker is up and running!
I0717 17:26:54.868949 31817 main.go:141] libmachine: Reticulating splines...
I0717 17:26:54.868955 31817 client.go:171] duration metric: took 23.822273283s to LocalClient.Create
I0717 17:26:54.868974 31817 start.go:167] duration metric: took 23.822329608s to libmachine.API.Create "ha-333994"
I0717 17:26:54.868982 31817 start.go:293] postStartSetup for "ha-333994-m02" (driver="kvm2")
I0717 17:26:54.868990 31817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 17:26:54.869011 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:54.869243 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 17:26:54.869264 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:54.871450 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.871816 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:54.871840 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:54.872022 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:54.872180 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:54.872326 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:54.872476 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
I0717 17:26:54.961235 31817 ssh_runner.go:195] Run: cat /etc/os-release
I0717 17:26:54.965604 31817 info.go:137] Remote host: Buildroot 2023.02.9
I0717 17:26:54.965626 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/addons for local assets ...
I0717 17:26:54.965684 31817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14409/.minikube/files for local assets ...
I0717 17:26:54.965757 31817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> 216612.pem in /etc/ssl/certs
I0717 17:26:54.965766 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /etc/ssl/certs/216612.pem
I0717 17:26:54.965847 31817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 17:26:54.975595 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:26:54.999236 31817 start.go:296] duration metric: took 130.241349ms for postStartSetup
I0717 17:26:54.999289 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetConfigRaw
I0717 17:26:54.999814 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:55.002512 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.002864 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.002901 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.003161 31817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/config.json ...
I0717 17:26:55.003366 31817 start.go:128] duration metric: took 23.974275382s to createHost
I0717 17:26:55.003388 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:55.005328 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.005632 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.005656 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.005830 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:55.006002 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.006161 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.006292 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:55.006451 31817 main.go:141] libmachine: Using SSH client type: native
I0717 17:26:55.006637 31817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil> [] 0s} 192.168.39.127 22 <nil> <nil>}
I0717 17:26:55.006649 31817 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0717 17:26:55.122903 31817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237215.098211807
I0717 17:26:55.122928 31817 fix.go:216] guest clock: 1721237215.098211807
I0717 17:26:55.122937 31817 fix.go:229] Guest: 2024-07-17 17:26:55.098211807 +0000 UTC Remote: 2024-07-17 17:26:55.003376883 +0000 UTC m=+77.663313056 (delta=94.834924ms)
I0717 17:26:55.122956 31817 fix.go:200] guest clock delta is within tolerance: 94.834924ms
I0717 17:26:55.122962 31817 start.go:83] releasing machines lock for "ha-333994-m02", held for 24.094009758s
I0717 17:26:55.122986 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.123244 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:55.125631 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.125927 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.125955 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.128661 31817 out.go:177] * Found network options:
I0717 17:26:55.130349 31817 out.go:177] - NO_PROXY=192.168.39.180
W0717 17:26:55.131717 31817 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 17:26:55.131742 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.132304 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.132476 31817 main.go:141] libmachine: (ha-333994-m02) Calling .DriverName
I0717 17:26:55.132554 31817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 17:26:55.132594 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
W0717 17:26:55.132666 31817 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 17:26:55.132744 31817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 17:26:55.132772 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHHostname
I0717 17:26:55.135185 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135477 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.135501 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135519 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135642 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:55.135817 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.135976 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:55.135995 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:55.135977 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:55.136127 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHPort
I0717 17:26:55.136190 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
I0717 17:26:55.136268 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHKeyPath
I0717 17:26:55.136402 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetSSHUsername
I0717 17:26:55.136527 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994-m02/id_rsa Username:docker}
W0717 17:26:55.220815 31817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 17:26:55.220875 31817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 17:26:55.245507 31817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 17:26:55.245531 31817 start.go:495] detecting cgroup driver to use...
I0717 17:26:55.245596 31817 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 17:26:55.278918 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 17:26:55.292940 31817 docker.go:217] disabling cri-docker service (if available) ...
I0717 17:26:55.293020 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 17:26:55.306646 31817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 17:26:55.321727 31817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 17:26:55.453026 31817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 17:26:55.618252 31817 docker.go:233] disabling docker service ...
I0717 17:26:55.618323 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 17:26:55.633535 31817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 17:26:55.647399 31817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 17:26:55.767544 31817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 17:26:55.888191 31817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 17:26:55.901625 31817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 17:26:55.919869 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 17:26:55.930472 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 17:26:55.940635 31817 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 17:26:55.940681 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 17:26:55.950966 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:26:55.961459 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 17:26:55.972051 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 17:26:55.983017 31817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 17:26:55.993746 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 17:26:56.004081 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 17:26:56.014291 31817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 17:26:56.024660 31817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 17:26:56.033932 31817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0717 17:26:56.033978 31817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0717 17:26:56.047409 31817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 17:26:56.057123 31817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 17:26:56.196097 31817 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 17:26:56.227087 31817 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 17:26:56.227147 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:26:56.232659 31817 retry.go:31] will retry after 933.236719ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0717 17:26:57.166776 31817 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 17:26:57.172003 31817 start.go:563] Will wait 60s for crictl version
I0717 17:26:57.172071 31817 ssh_runner.go:195] Run: which crictl
I0717 17:26:57.176036 31817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 17:26:57.214182 31817 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.19
RuntimeApiVersion: v1
I0717 17:26:57.214259 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:57.239883 31817 ssh_runner.go:195] Run: containerd --version
I0717 17:26:57.270199 31817 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.7.19 ...
I0717 17:26:57.271461 31817 out.go:177] - env NO_PROXY=192.168.39.180
I0717 17:26:57.272522 31817 main.go:141] libmachine: (ha-333994-m02) Calling .GetIP
I0717 17:26:57.274799 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:57.275154 31817 main.go:141] libmachine: (ha-333994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:0f:81", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:26:45 +0000 UTC Type:0 Mac:52:54:00:b1:0f:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-333994-m02 Clientid:01:52:54:00:b1:0f:81}
I0717 17:26:57.275183 31817 main.go:141] libmachine: (ha-333994-m02) DBG | domain ha-333994-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:b1:0f:81 in network mk-ha-333994
I0717 17:26:57.275351 31817 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0717 17:26:57.279650 31817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 17:26:57.293824 31817 mustload.go:65] Loading cluster: ha-333994
I0717 17:26:57.294006 31817 config.go:182] Loaded profile config "ha-333994": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 17:26:57.294269 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:57.294293 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:57.308598 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
I0717 17:26:57.309000 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:57.309480 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:57.309502 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:57.309752 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:57.309903 31817 main.go:141] libmachine: (ha-333994) Calling .GetState
I0717 17:26:57.311534 31817 host.go:66] Checking if "ha-333994" exists ...
I0717 17:26:57.311828 31817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0717 17:26:57.311870 31817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:26:57.326228 31817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
I0717 17:26:57.326552 31817 main.go:141] libmachine: () Calling .GetVersion
I0717 17:26:57.327001 31817 main.go:141] libmachine: Using API Version 1
I0717 17:26:57.327022 31817 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:26:57.327287 31817 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:26:57.327462 31817 main.go:141] libmachine: (ha-333994) Calling .DriverName
I0717 17:26:57.327619 31817 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994 for IP: 192.168.39.127
I0717 17:26:57.327627 31817 certs.go:194] generating shared ca certs ...
I0717 17:26:57.327639 31817 certs.go:226] acquiring lock for ca certs: {Name:mkbd59c659d87951ff3ee355cd9afc07084cc973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:57.327753 31817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key
I0717 17:26:57.327802 31817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key
I0717 17:26:57.327812 31817 certs.go:256] generating profile certs ...
I0717 17:26:57.327877 31817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/client.key
I0717 17:26:57.327900 31817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff
I0717 17:26:57.327913 31817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.180 192.168.39.127 192.168.39.254]
I0717 17:26:57.458239 31817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff ...
I0717 17:26:57.458268 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff: {Name:mke87290a04a64b5c9a3f70eca7bbd7f3ab62e57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:57.458428 31817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff ...
I0717 17:26:57.458440 31817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff: {Name:mkcd9a6c319770e7232a22dd759a83106e261b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 17:26:57.458506 31817 certs.go:381] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt.3a75f3ff -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt
I0717 17:26:57.458644 31817 certs.go:385] copying /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key.3a75f3ff -> /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key
I0717 17:26:57.458768 31817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key
I0717 17:26:57.458782 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0717 17:26:57.458794 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0717 17:26:57.458806 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0717 17:26:57.458818 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0717 17:26:57.458830 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0717 17:26:57.458841 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0717 17:26:57.458852 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0717 17:26:57.458865 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0717 17:26:57.458910 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem (1338 bytes)
W0717 17:26:57.458936 31817 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661_empty.pem, impossibly tiny 0 bytes
I0717 17:26:57.458945 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca-key.pem (1679 bytes)
I0717 17:26:57.458966 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/ca.pem (1082 bytes)
I0717 17:26:57.458986 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/cert.pem (1123 bytes)
I0717 17:26:57.459013 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/key.pem (1679 bytes)
I0717 17:26:57.459048 31817 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem (1708 bytes)
I0717 17:26:57.459071 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem -> /usr/share/ca-certificates/216612.pem
I0717 17:26:57.459084 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:57.459095 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem -> /usr/share/ca-certificates/21661.pem
I0717 17:26:57.459124 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHHostname
I0717 17:26:57.461994 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:57.462403 31817 main.go:141] libmachine: (ha-333994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4b:68", ip: ""} in network mk-ha-333994: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:51 +0000 UTC Type:0 Mac:52:54:00:73:4b:68 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-333994 Clientid:01:52:54:00:73:4b:68}
I0717 17:26:57.462430 31817 main.go:141] libmachine: (ha-333994) DBG | domain ha-333994 has defined IP address 192.168.39.180 and MAC address 52:54:00:73:4b:68 in network mk-ha-333994
I0717 17:26:57.462587 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHPort
I0717 17:26:57.462744 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHKeyPath
I0717 17:26:57.462905 31817 main.go:141] libmachine: (ha-333994) Calling .GetSSHUsername
I0717 17:26:57.462996 31817 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14409/.minikube/machines/ha-333994/id_rsa Username:docker}
I0717 17:26:57.538412 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0717 17:26:57.543898 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0717 17:26:57.556474 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0717 17:26:57.560660 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0717 17:26:57.570923 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0717 17:26:57.574879 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0717 17:26:57.585092 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0717 17:26:57.589304 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I0717 17:26:57.599639 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0717 17:26:57.603878 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0717 17:26:57.616227 31817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0717 17:26:57.620350 31817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I0717 17:26:57.632125 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 17:26:57.657494 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0717 17:26:57.682754 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 17:26:57.707851 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 17:26:57.731860 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0717 17:26:57.757707 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0717 17:26:57.781205 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 17:26:57.804275 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/profiles/ha-333994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 17:26:57.829670 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/files/etc/ssl/certs/216612.pem --> /usr/share/ca-certificates/216612.pem (1708 bytes)
I0717 17:26:57.855063 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 17:26:57.881215 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/certs/21661.pem --> /usr/share/ca-certificates/21661.pem (1338 bytes)
I0717 17:26:57.906393 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0717 17:26:57.924441 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0717 17:26:57.942446 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0717 17:26:57.958731 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I0717 17:26:57.974971 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0717 17:26:57.991007 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I0717 17:26:58.006856 31817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
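The `stat` / `scp` pairs above show ssh_runner's sync pattern: stat the destination first and transfer only when the file is missing or differs. A minimal local sketch of that pattern, using plain `cp` in a temp directory instead of minikube's SSH transfer (paths and the `sync_file` helper are illustrative, not minikube code):

```shell
#!/usr/bin/env sh
# Sketch of the check-then-copy pattern visible in the log above.
set -eu

src=$(mktemp)
dst=$(mktemp -d)/ca.crt
printf 'dummy-cert\n' > "$src"

sync_file() {
    # Mirror the "existence check" step: stat the target before copying.
    if want=$(stat -c "%s" "$1") && have=$(stat -c "%s" "$2" 2>/dev/null) \
        && [ "$want" = "$have" ]; then
        echo "skip $2"
    else
        cp "$1" "$2"
        echo "copied $2"
    fi
}

sync_file "$src" "$dst"   # first run: target missing, so it copies
sync_file "$src" "$dst"   # second run: sizes match, so it skips
```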
I0717 17:26:58.023616 31817 ssh_runner.go:195] Run: openssl version
I0717 17:26:58.029309 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216612.pem && ln -fs /usr/share/ca-certificates/216612.pem /etc/ssl/certs/216612.pem"
I0717 17:26:58.040022 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216612.pem
I0717 17:26:58.044627 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:21 /usr/share/ca-certificates/216612.pem
I0717 17:26:58.044684 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216612.pem
I0717 17:26:58.050556 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/216612.pem /etc/ssl/certs/3ec20f2e.0"
I0717 17:26:58.060921 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 17:26:58.071585 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:58.075832 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:13 /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:58.075882 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 17:26:58.081281 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 17:26:58.091769 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21661.pem && ln -fs /usr/share/ca-certificates/21661.pem /etc/ssl/certs/21661.pem"
I0717 17:26:58.102180 31817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21661.pem
I0717 17:26:58.106524 31817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:21 /usr/share/ca-certificates/21661.pem
I0717 17:26:58.106575 31817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21661.pem
I0717 17:26:58.112063 31817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21661.pem /etc/ssl/certs/51391683.0"
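The `openssl x509 -hash` / `ln -fs` pairs above install each CA using OpenSSL's lookup convention: a symlink named `<subject-hash>.0` in the trust directory. A self-contained sketch of that convention with a throwaway self-signed cert in a temp directory (it does not touch `/etc/ssl/certs`):

```shell
#!/usr/bin/env sh
# Reproduce the CA-installation convention the runs above implement.
set -eu

dir=$(mktemp -d)
# Throwaway self-signed CA standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
    -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# OpenSSL finds CAs in a -CApath directory via <subject-hash>.0 links.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"

# The cert now verifies against the directory-based trust store.
openssl verify -CApath "$dir" "$dir/ca.pem"
```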
I0717 17:26:58.122675 31817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 17:26:58.126524 31817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0717 17:26:58.126576 31817 kubeadm.go:934] updating node {m02 192.168.39.127 8443 v1.30.2 containerd true true} ...
I0717 17:26:58.126678 31817 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-333994-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-333994 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 17:26:58.126707 31817 kube-vip.go:115] generating kube-vip config ...
I0717 17:26:58.126735 31817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0717 17:26:58.143233 31817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0717 17:26:58.143291 31817 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I0717 17:26:58.143334 31817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 17:26:58.153157 31817 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
Initiating transfer...
I0717 17:26:58.153211 31817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
I0717 17:26:58.162734 31817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
I0717 17:26:58.162759 31817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
I0717 17:26:58.162833 31817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
I0717 17:26:58.162840 31817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubelet
I0717 17:26:58.162877 31817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubeadm
I0717 17:26:58.167096 31817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
I0717 17:26:58.167122 31817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
I0717 17:27:14.120624 31817 out.go:177]
W0717 17:27:14.122586 31817 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19283-14409/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49ca920 0x49ca920 0x49ca920 0x49ca920 0x49ca920 0x49ca920 0x49ca920] Decompressors:map[bz2:0xc000883490 gz:0xc000883498 tar:0xc000883440 tar.bz2:0xc000883450 tar.gz:0xc000883460 tar.xz:0xc000883470 tar.zst:0xc000883480 tbz2:0xc000883450 tgz:0xc000883460 txz:0xc000883470 tzst:0xc000883480 xz:0xc0008834a0 zip:0xc0008834b0 zst:0xc0008834a8] Getters:map[file:0xc000691350 http:0xc0009febe0 https:0xc0009fec30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.194.0.2:36556->151.101.193.55:443: read: connection reset by peer
W0717 17:27:14.122605 31817 out.go:239] *
W0717 17:27:14.123461 31817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 17:27:14.125013 31817 out.go:177]
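The root cause above is a transient TCP reset while fetching kubelet from dl.k8s.io, after which minikube aborts the m02 node. One workaround (an assumption, not an official procedure) is to pre-seed the binary cache path shown in the log and let the checksum verify it, the same file-based check the getter performs. The network fetches are left commented; the verification step runs against a local stand-in file so the pattern is demonstrable offline:

```shell
#!/usr/bin/env sh
# Hedged sketch: pre-seed minikube's kubelet cache by hand.
set -eu

cache="$HOME/.minikube/cache/linux/amd64/v1.30.2"   # path from the log above
# curl -fLo "$cache/kubelet" https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet
# curl -fLo "$cache/kubelet.sha256" https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256

# Local stand-in for the checksum check (fake file instead of kubelet):
work=$(mktemp -d)
printf 'fake-kubelet\n' > "$work/kubelet"
sha256sum "$work/kubelet" | awk '{print $1}' > "$work/kubelet.sha256"

# dl.k8s.io .sha256 files contain only the bare digest, so pair it with
# the filename before handing it to `sha256sum -c`.
( cd "$work" && printf '%s  kubelet\n' "$(cat kubelet.sha256)" | sha256sum -c - )
```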
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
86b483ab22e1a 6e38f40d628db 27 seconds ago Running storage-provisioner 0 4ae1e67fc3bab storage-provisioner
dcb6f2bdfe23d cbb01a7bd410d 27 seconds ago Running coredns 0 3e096287e39aa coredns-7db6d8ff4d-n4xtd
5e03d17e52e34 cbb01a7bd410d 27 seconds ago Running coredns 0 a55470f3593c5 coredns-7db6d8ff4d-sh96r
f1b88563e61d6 5cc3abe5717db 39 seconds ago Running kindnet-cni 0 18bb6baa955c0 kindnet-5zksq
0a2a73f6200a3 53c535741fb44 44 seconds ago Running kube-proxy 0 44d5a25817f0f kube-proxy-jlzt5
2030e6caab488 38af8ddebf499 59 seconds ago Running kube-vip 0 08971202a22cc kube-vip-ha-333994
d3a0374a88e2c 56ce0fd9fb532 About a minute ago Running kube-apiserver 0 69d556e9fd975 kube-apiserver-ha-333994
2f62c96e1a784 7820c83aa1394 About a minute ago Running kube-scheduler 0 14cc4b6f0a671 kube-scheduler-ha-333994
5f332be219358 3861cfcd7c04c About a minute ago Running etcd 0 2fa30f34188fb etcd-ha-333994
515c5ff9f46da e874818b3caac About a minute ago Running kube-controller-manager 0 800370bd69668 kube-controller-manager-ha-333994
==> containerd <==
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.069323091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.069416728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.092092406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.092222348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.092248869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.092335207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.111018825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.111107906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.111124103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.111525114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.203194655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sh96r,Uid:40fe2cb3-25ad-4d21-a67c-16752d657439,Namespace:kube-system,Attempt:0,} returns sandbox id \"a55470f3593c58d278ff17cf8fd31c0bbba9c51036939baae2b698a9a530e069\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.209180903Z" level=info msg="CreateContainer within sandbox \"a55470f3593c58d278ff17cf8fd31c0bbba9c51036939baae2b698a9a530e069\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.224900705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n4xtd,Uid:29a654a4-f52d-4594-b402-93061221e0e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e096287e39aa2659fbac6271df8b9e49c2f98bff34a88e616d0f4d213890d29\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.227613767Z" level=info msg="CreateContainer within sandbox \"3e096287e39aa2659fbac6271df8b9e49c2f98bff34a88e616d0f4d213890d29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.255811544Z" level=info msg="CreateContainer within sandbox \"a55470f3593c58d278ff17cf8fd31c0bbba9c51036939baae2b698a9a530e069\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e03d17e52e34f0695bfa49800923a86525fd46883d344192dfddffda1bb3e8a\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.256711991Z" level=info msg="StartContainer for \"5e03d17e52e34f0695bfa49800923a86525fd46883d344192dfddffda1bb3e8a\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.269282488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:123c311b-67ed-42b2-ad53-cc59077dfbe7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ae1e67fc3bab5bbd9a5e5575cb054716cb84745a6c3f9dcbd0081499baa6010\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.272818878Z" level=info msg="CreateContainer within sandbox \"4ae1e67fc3bab5bbd9a5e5575cb054716cb84745a6c3f9dcbd0081499baa6010\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.281551441Z" level=info msg="CreateContainer within sandbox \"3e096287e39aa2659fbac6271df8b9e49c2f98bff34a88e616d0f4d213890d29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcb6f2bdfe23d3e6924f51ebb8a33d8431d3ee154daf348c93ed18f38d0c971f\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.282808085Z" level=info msg="StartContainer for \"dcb6f2bdfe23d3e6924f51ebb8a33d8431d3ee154daf348c93ed18f38d0c971f\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.306661258Z" level=info msg="CreateContainer within sandbox \"4ae1e67fc3bab5bbd9a5e5575cb054716cb84745a6c3f9dcbd0081499baa6010\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"86b483ab22e1a88b745f12d55b1fa66f91f47882547e5407707e50180e29df21\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.308244470Z" level=info msg="StartContainer for \"86b483ab22e1a88b745f12d55b1fa66f91f47882547e5407707e50180e29df21\""
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.405145943Z" level=info msg="StartContainer for \"5e03d17e52e34f0695bfa49800923a86525fd46883d344192dfddffda1bb3e8a\" returns successfully"
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.416098689Z" level=info msg="StartContainer for \"dcb6f2bdfe23d3e6924f51ebb8a33d8431d3ee154daf348c93ed18f38d0c971f\" returns successfully"
Jul 17 17:26:47 ha-333994 containerd[645]: time="2024-07-17T17:26:47.459142473Z" level=info msg="StartContainer for \"86b483ab22e1a88b745f12d55b1fa66f91f47882547e5407707e50180e29df21\" returns successfully"
==> coredns [5e03d17e52e34f0695bfa49800923a86525fd46883d344192dfddffda1bb3e8a] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:45601 - 22388 "HINFO IN 667985956384862735.408586044970053011. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010632325s
==> coredns [dcb6f2bdfe23d3e6924f51ebb8a33d8431d3ee154daf348c93ed18f38d0c971f] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:37241 - 12580 "HINFO IN 7703422814786955468.6939822740795333208. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008540763s
==> describe nodes <==
Name: ha-333994
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-333994
kubernetes.io/os=linux
minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
minikube.k8s.io/name=ha-333994
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_17T17_26_17_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Jul 2024 17:26:15 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-333994
AcquireTime: <unset>
RenewTime: Wed, 17 Jul 2024 17:27:07 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Jul 2024 17:26:46 +0000 Wed, 17 Jul 2024 17:26:15 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Jul 2024 17:26:46 +0000 Wed, 17 Jul 2024 17:26:15 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Jul 2024 17:26:46 +0000 Wed, 17 Jul 2024 17:26:15 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Jul 2024 17:26:46 +0000 Wed, 17 Jul 2024 17:26:46 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.180
Hostname: ha-333994
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: da3e8959a305489b85ad0eed18b3234d
System UUID: da3e8959-a305-489b-85ad-0eed18b3234d
Boot ID: b53aa9e9-08a4-4435-bef0-7135f94a954e
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.19
Kubelet Version: v1.30.2
Kube-Proxy Version: v1.30.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-7db6d8ff4d-n4xtd 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 45s
kube-system coredns-7db6d8ff4d-sh96r 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 45s
kube-system etcd-ha-333994 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 59s
kube-system kindnet-5zksq 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 46s
kube-system kube-apiserver-ha-333994 250m (12%) 0 (0%) 0 (0%) 0 (0%) 59s
kube-system kube-controller-manager-ha-333994 200m (10%) 0 (0%) 0 (0%) 0 (0%) 59s
kube-system kube-proxy-jlzt5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46s
kube-system kube-scheduler-ha-333994 100m (5%) 0 (0%) 0 (0%) 0 (0%) 59s
kube-system kube-vip-ha-333994 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 290Mi (13%) 390Mi (18%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 44s kube-proxy
Normal Starting 66s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 66s (x4 over 66s) kubelet Node ha-333994 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 66s (x4 over 66s) kubelet Node ha-333994 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 66s (x3 over 66s) kubelet Node ha-333994 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 66s kubelet Updated Node Allocatable limit across pods
Normal Starting 59s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 59s kubelet Node ha-333994 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 59s kubelet Node ha-333994 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 59s kubelet Node ha-333994 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 59s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 46s node-controller Node ha-333994 event: Registered Node ha-333994 in Controller
Normal NodeReady 29s kubelet Node ha-333994 status is now: NodeReady
==> dmesg <==
[Jul17 17:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.050377] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
[ +0.040128] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.544620] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.311602] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +4.612117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +5.994239] systemd-fstab-generator[509]: Ignoring "noauto" option for root device
[ +0.059342] kauditd_printk_skb: 1 callbacks suppressed
[ +0.054424] systemd-fstab-generator[521]: Ignoring "noauto" option for root device
[ +0.171527] systemd-fstab-generator[535]: Ignoring "noauto" option for root device
[ +0.142059] systemd-fstab-generator[547]: Ignoring "noauto" option for root device
[ +0.293838] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
[Jul17 17:26] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
[ +0.060652] kauditd_printk_skb: 158 callbacks suppressed
[ +0.475443] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
[ +3.877515] systemd-fstab-generator[863]: Ignoring "noauto" option for root device
[ +1.168977] kauditd_printk_skb: 85 callbacks suppressed
[ +5.141999] kauditd_printk_skb: 35 callbacks suppressed
[ +0.960648] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
[ +5.705099] kauditd_printk_skb: 23 callbacks suppressed
[ +13.765378] kauditd_printk_skb: 29 callbacks suppressed
==> etcd [5f332be219358a1962906c8879dc8340cacfe7b8d5b0e42191706a9d9285ef46] <==
{"level":"info","ts":"2024-07-17T17:26:10.567184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=(808613133158692504)"}
{"level":"info","ts":"2024-07-17T17:26:10.569058Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","added-peer-id":"b38c55c42a3b698","added-peer-peer-urls":["https://192.168.39.180:2380"]}
{"level":"info","ts":"2024-07-17T17:26:10.569991Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-07-17T17:26:10.574483Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b38c55c42a3b698","initial-advertise-peer-urls":["https://192.168.39.180:2380"],"listen-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-07-17T17:26:10.574541Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-07-17T17:26:10.574981Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.180:2380"}
{"level":"info","ts":"2024-07-17T17:26:10.5751Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.180:2380"}
{"level":"info","ts":"2024-07-17T17:26:10.795898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 is starting a new election at term 1"}
{"level":"info","ts":"2024-07-17T17:26:10.796088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became pre-candidate at term 1"}
{"level":"info","ts":"2024-07-17T17:26:10.796202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgPreVoteResp from b38c55c42a3b698 at term 1"}
{"level":"info","ts":"2024-07-17T17:26:10.796264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became candidate at term 2"}
{"level":"info","ts":"2024-07-17T17:26:10.79633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgVoteResp from b38c55c42a3b698 at term 2"}
{"level":"info","ts":"2024-07-17T17:26:10.79643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became leader at term 2"}
{"level":"info","ts":"2024-07-17T17:26:10.796478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b38c55c42a3b698 elected leader b38c55c42a3b698 at term 2"}
{"level":"info","ts":"2024-07-17T17:26:10.801067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b38c55c42a3b698","local-member-attributes":"{Name:ha-333994 ClientURLs:[https://192.168.39.180:2379]}","request-path":"/0/members/b38c55c42a3b698/attributes","cluster-id":"5a7d3c553a64e690","publish-timeout":"7s"}
{"level":"info","ts":"2024-07-17T17:26:10.801194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-17T17:26:10.801316Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T17:26:10.806906Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-07-17T17:26:10.807031Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-07-17T17:26:10.812458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
{"level":"info","ts":"2024-07-17T17:26:10.801338Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-17T17:26:10.817184Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T17:26:10.817367Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T17:26:10.817882Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T17:26:10.819447Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
==> kernel <==
17:27:15 up 1 min, 0 users, load average: 0.68, 0.29, 0.10
Linux ha-333994 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kindnet [f1b88563e61d620b61da7e9c081cadd03d26d579ae84f2cad14d040ee1854428] <==
I0717 17:26:35.792111 1 main.go:110] connected to apiserver: https://10.96.0.1:443
I0717 17:26:35.883368 1 main.go:140] hostIP = 192.168.39.180
podIP = 192.168.39.180
I0717 17:26:35.883668 1 main.go:149] setting mtu 1500 for CNI
I0717 17:26:35.883736 1 main.go:179] kindnetd IP family: "ipv4"
I0717 17:26:35.883770 1 main.go:183] noMask IPv4 subnets: [10.244.0.0/16]
I0717 17:26:36.593010 1 main.go:223] Error initializing nftables: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
add table inet kube-network-policies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
, skipping network policies
I0717 17:26:46.602201 1 main.go:299] Handling node with IPs: map[192.168.39.180:{}]
I0717 17:26:46.602460 1 main.go:303] handling current node
I0717 17:26:56.596540 1 main.go:299] Handling node with IPs: map[192.168.39.180:{}]
I0717 17:26:56.596752 1 main.go:303] handling current node
I0717 17:27:06.600804 1 main.go:299] Handling node with IPs: map[192.168.39.180:{}]
I0717 17:27:06.600898 1 main.go:303] handling current node
==> kube-apiserver [d3a0374a88e2c013e134eec1052b56a531aae862faa0eb5bb6e6411c1d40d411] <==
I0717 17:26:12.626156 1 shared_informer.go:320] Caches are synced for node_authorizer
I0717 17:26:12.627422 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0717 17:26:12.627461 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0717 17:26:12.633544 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0717 17:26:12.633578 1 policy_source.go:224] refreshing policies
E0717 17:26:12.663111 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
E0717 17:26:12.683423 1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
I0717 17:26:12.731655 1 controller.go:615] quota admission added evaluator for: namespaces
I0717 17:26:12.867696 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0717 17:26:13.519087 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0717 17:26:13.524933 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0717 17:26:13.525042 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0717 17:26:14.141166 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0717 17:26:14.190199 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0717 17:26:14.346951 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0717 17:26:14.355637 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.180]
I0717 17:26:14.357063 1 controller.go:615] quota admission added evaluator for: endpoints
I0717 17:26:14.363079 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0717 17:26:14.550932 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0717 17:26:16.299323 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0717 17:26:16.313650 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0717 17:26:16.444752 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0717 17:26:29.574426 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0717 17:26:29.724582 1 controller.go:615] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [515c5ff9f46dae1a0befd8efb5eb62b1d7d5a8d9ab3d2489e5d77225c2969697] <==
I0717 17:26:29.073403 1 shared_informer.go:320] Caches are synced for ephemeral
I0717 17:26:29.073778 1 shared_informer.go:320] Caches are synced for PVC protection
I0717 17:26:29.126092 1 shared_informer.go:320] Caches are synced for attach detach
I0717 17:26:29.127955 1 shared_informer.go:320] Caches are synced for persistent volume
I0717 17:26:29.172459 1 shared_informer.go:320] Caches are synced for cronjob
I0717 17:26:29.227981 1 shared_informer.go:320] Caches are synced for resource quota
I0717 17:26:29.229561 1 shared_informer.go:320] Caches are synced for resource quota
I0717 17:26:29.645377 1 shared_informer.go:320] Caches are synced for garbage collector
I0717 17:26:29.645518 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0717 17:26:29.676538 1 shared_informer.go:320] Caches are synced for garbage collector
I0717 17:26:30.131742 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="401.168376ms"
I0717 17:26:30.147417 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.609225ms"
I0717 17:26:30.150595 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.178µs"
I0717 17:26:30.156045 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.456µs"
I0717 17:26:46.686080 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.287244ms"
I0717 17:26:46.690107 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.918µs"
I0717 17:26:46.708437 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.561µs"
I0717 17:26:46.721053 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.491µs"
I0717 17:26:47.592898 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.998µs"
I0717 17:26:47.650175 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.942µs"
I0717 17:26:48.607906 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.62659ms"
I0717 17:26:48.608008 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.426µs"
I0717 17:26:48.647797 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.456738ms"
I0717 17:26:48.648394 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.436µs"
I0717 17:26:49.026935 1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
==> kube-proxy [0a2a73f6200a3c41f2559944af1b8896b01ccd3f6fa5ac3a4d66a7ec20085f45] <==
I0717 17:26:30.633390 1 server_linux.go:69] "Using iptables proxy"
I0717 17:26:30.664296 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.180"]
I0717 17:26:30.777855 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0717 17:26:30.777915 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0717 17:26:30.777933 1 server_linux.go:165] "Using iptables Proxier"
I0717 17:26:30.782913 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0717 17:26:30.783727 1 server.go:872] "Version info" version="v1.30.2"
I0717 17:26:30.783743 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0717 17:26:30.785883 1 config.go:192] "Starting service config controller"
I0717 17:26:30.786104 1 shared_informer.go:313] Waiting for caches to sync for service config
I0717 17:26:30.786184 1 config.go:101] "Starting endpoint slice config controller"
I0717 17:26:30.786194 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0717 17:26:30.786196 1 config.go:319] "Starting node config controller"
I0717 17:26:30.786202 1 shared_informer.go:313] Waiting for caches to sync for node config
I0717 17:26:30.886459 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0717 17:26:30.886517 1 shared_informer.go:320] Caches are synced for node config
I0717 17:26:30.886527 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [2f62c96e1a7844ed21d49b39ee23ef0aefd932e9d5a3ac7a78f787779864806c] <==
E0717 17:26:12.612716 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0717 17:26:12.612322 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0717 17:26:12.612328 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0717 17:26:12.612334 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0717 17:26:12.612341 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0717 17:26:12.612951 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0717 17:26:13.435639 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0717 17:26:13.435693 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0717 17:26:13.453973 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0717 17:26:13.454017 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0717 17:26:13.542464 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0717 17:26:13.542509 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0717 17:26:13.613338 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0717 17:26:13.613487 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0717 17:26:13.619979 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0717 17:26:13.620074 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0717 17:26:13.625523 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0717 17:26:13.625659 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0717 17:26:13.773180 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0717 17:26:13.773245 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0717 17:26:13.789228 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0717 17:26:13.789279 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0717 17:26:13.882287 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0717 17:26:13.882339 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
I0717 17:26:16.586108 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.605015 1321 topology_manager.go:215] "Topology Admit Handler" podUID="de0fd552-4dd9-4de0-9520-1427e282021b" podNamespace="kube-system" podName="kube-proxy-jlzt5"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.617045 1321 topology_manager.go:215] "Topology Admit Handler" podUID="9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8" podNamespace="kube-system" podName="kindnet-5zksq"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.680457 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de0fd552-4dd9-4de0-9520-1427e282021b-xtables-lock\") pod \"kube-proxy-jlzt5\" (UID: \"de0fd552-4dd9-4de0-9520-1427e282021b\") " pod="kube-system/kube-proxy-jlzt5"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.680611 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8-xtables-lock\") pod \"kindnet-5zksq\" (UID: \"9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8\") " pod="kube-system/kindnet-5zksq"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.680692 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfqb\" (UniqueName: \"kubernetes.io/projected/9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8-kube-api-access-whfqb\") pod \"kindnet-5zksq\" (UID: \"9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8\") " pod="kube-system/kindnet-5zksq"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.680897 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de0fd552-4dd9-4de0-9520-1427e282021b-kube-proxy\") pod \"kube-proxy-jlzt5\" (UID: \"de0fd552-4dd9-4de0-9520-1427e282021b\") " pod="kube-system/kube-proxy-jlzt5"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.681026 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de0fd552-4dd9-4de0-9520-1427e282021b-lib-modules\") pod \"kube-proxy-jlzt5\" (UID: \"de0fd552-4dd9-4de0-9520-1427e282021b\") " pod="kube-system/kube-proxy-jlzt5"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.681158 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtscf\" (UniqueName: \"kubernetes.io/projected/de0fd552-4dd9-4de0-9520-1427e282021b-kube-api-access-xtscf\") pod \"kube-proxy-jlzt5\" (UID: \"de0fd552-4dd9-4de0-9520-1427e282021b\") " pod="kube-system/kube-proxy-jlzt5"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.681280 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8-cni-cfg\") pod \"kindnet-5zksq\" (UID: \"9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8\") " pod="kube-system/kindnet-5zksq"
Jul 17 17:26:29 ha-333994 kubelet[1321]: I0717 17:26:29.681398 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8-lib-modules\") pod \"kindnet-5zksq\" (UID: \"9b72ef3c-dcf4-4ec3-8087-00689ff2d2e8\") " pod="kube-system/kindnet-5zksq"
Jul 17 17:26:36 ha-333994 kubelet[1321]: I0717 17:26:36.547674 1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jlzt5" podStartSLOduration=7.547648694 podStartE2EDuration="7.547648694s" podCreationTimestamp="2024-07-17 17:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 17:26:30.526621262 +0000 UTC m=+14.258990056" watchObservedRunningTime="2024-07-17 17:26:36.547648694 +0000 UTC m=+20.280017488"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.644940 1321 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.681890 1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5zksq" podStartSLOduration=12.762634758 podStartE2EDuration="17.68181331s" podCreationTimestamp="2024-07-17 17:26:29 +0000 UTC" firstStartedPulling="2024-07-17 17:26:30.545986834 +0000 UTC m=+14.278355610" lastFinishedPulling="2024-07-17 17:26:35.465165387 +0000 UTC m=+19.197534162" observedRunningTime="2024-07-17 17:26:36.549949571 +0000 UTC m=+20.282318365" watchObservedRunningTime="2024-07-17 17:26:46.68181331 +0000 UTC m=+30.414182103"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.682086 1321 topology_manager.go:215] "Topology Admit Handler" podUID="40fe2cb3-25ad-4d21-a67c-16752d657439" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sh96r"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.688079 1321 topology_manager.go:215] "Topology Admit Handler" podUID="123c311b-67ed-42b2-ad53-cc59077dfbe7" podNamespace="kube-system" podName="storage-provisioner"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.691343 1321 topology_manager.go:215] "Topology Admit Handler" podUID="29a654a4-f52d-4594-b402-93061221e0e1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n4xtd"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.800136 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40fe2cb3-25ad-4d21-a67c-16752d657439-config-volume\") pod \"coredns-7db6d8ff4d-sh96r\" (UID: \"40fe2cb3-25ad-4d21-a67c-16752d657439\") " pod="kube-system/coredns-7db6d8ff4d-sh96r"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.800186 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29a654a4-f52d-4594-b402-93061221e0e1-config-volume\") pod \"coredns-7db6d8ff4d-n4xtd\" (UID: \"29a654a4-f52d-4594-b402-93061221e0e1\") " pod="kube-system/coredns-7db6d8ff4d-n4xtd"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.800207 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/123c311b-67ed-42b2-ad53-cc59077dfbe7-tmp\") pod \"storage-provisioner\" (UID: \"123c311b-67ed-42b2-ad53-cc59077dfbe7\") " pod="kube-system/storage-provisioner"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.800224 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp2fc\" (UniqueName: \"kubernetes.io/projected/40fe2cb3-25ad-4d21-a67c-16752d657439-kube-api-access-kp2fc\") pod \"coredns-7db6d8ff4d-sh96r\" (UID: \"40fe2cb3-25ad-4d21-a67c-16752d657439\") " pod="kube-system/coredns-7db6d8ff4d-sh96r"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.800250 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88sv\" (UniqueName: \"kubernetes.io/projected/29a654a4-f52d-4594-b402-93061221e0e1-kube-api-access-d88sv\") pod \"coredns-7db6d8ff4d-n4xtd\" (UID: \"29a654a4-f52d-4594-b402-93061221e0e1\") " pod="kube-system/coredns-7db6d8ff4d-n4xtd"
Jul 17 17:26:46 ha-333994 kubelet[1321]: I0717 17:26:46.800268 1321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9pr\" (UniqueName: \"kubernetes.io/projected/123c311b-67ed-42b2-ad53-cc59077dfbe7-kube-api-access-wq9pr\") pod \"storage-provisioner\" (UID: \"123c311b-67ed-42b2-ad53-cc59077dfbe7\") " pod="kube-system/storage-provisioner"
Jul 17 17:26:47 ha-333994 kubelet[1321]: I0717 17:26:47.624955 1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n4xtd" podStartSLOduration=17.6249316 podStartE2EDuration="17.6249316s" podCreationTimestamp="2024-07-17 17:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 17:26:47.590306238 +0000 UTC m=+31.322675033" watchObservedRunningTime="2024-07-17 17:26:47.6249316 +0000 UTC m=+31.357300406"
Jul 17 17:26:47 ha-333994 kubelet[1321]: I0717 17:26:47.647670 1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.647650055 podStartE2EDuration="17.647650055s" podCreationTimestamp="2024-07-17 17:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 17:26:47.625496174 +0000 UTC m=+31.357864970" watchObservedRunningTime="2024-07-17 17:26:47.647650055 +0000 UTC m=+31.380018850"
Jul 17 17:26:48 ha-333994 kubelet[1321]: I0717 17:26:48.594167 1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sh96r" podStartSLOduration=18.594150349 podStartE2EDuration="18.594150349s" podCreationTimestamp="2024-07-17 17:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 17:26:47.650892639 +0000 UTC m=+31.383261416" watchObservedRunningTime="2024-07-17 17:26:48.594150349 +0000 UTC m=+32.326519140"
==> storage-provisioner [86b483ab22e1a88b745f12d55b1fa66f91f47882547e5407707e50180e29df21] <==
I0717 17:26:47.481175 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0717 17:26:47.495592 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0717 17:26:47.495817 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0717 17:26:47.507492 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0717 17:26:47.511210 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-333994_6bfaee24-69b3-4179-b0c0-9965e95a63d8!
I0717 17:26:47.516960 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a33d6ef-207d-4ea5-bcad-ac569127b889", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-333994_6bfaee24-69b3-4179-b0c0-9965e95a63d8 became leader
I0717 17:26:47.611924 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-333994_6bfaee24-69b3-4179-b0c0-9965e95a63d8!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-333994 -n ha-333994
helpers_test.go:261: (dbg) Run: kubectl --context ha-333994 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (98.73s)