=== RUN TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run: out/minikube-linux-amd64 start -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2
E1227 08:55:12.704577 9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : exit status 80 (1m28.080234182s)
-- stdout --
* [multinode-899276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=22344
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting "multinode-899276" primary control-plane node in "multinode-899276" cluster
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Starting "multinode-899276-m02" worker node in "multinode-899276" cluster
* Found network options:
- NO_PROXY=192.168.39.24
- env NO_PROXY=192.168.39.24
-- /stdout --
** stderr **
I1227 08:54:37.348894 24108 out.go:360] Setting OutFile to fd 1 ...
I1227 08:54:37.349196 24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:54:37.349207 24108 out.go:374] Setting ErrFile to fd 2...
I1227 08:54:37.349214 24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:54:37.349401 24108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
I1227 08:54:37.349901 24108 out.go:368] Setting JSON to false
I1227 08:54:37.350702 24108 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2227,"bootTime":1766823450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1227 08:54:37.350761 24108 start.go:143] virtualization: kvm guest
I1227 08:54:37.352914 24108 out.go:179] * [multinode-899276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1227 08:54:37.354122 24108 notify.go:221] Checking for updates...
I1227 08:54:37.354140 24108 out.go:179] - MINIKUBE_LOCATION=22344
I1227 08:54:37.355599 24108 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 08:54:37.356985 24108 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
I1227 08:54:37.358228 24108 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:54:37.359373 24108 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1227 08:54:37.360648 24108 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 08:54:37.362069 24108 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 08:54:37.398292 24108 out.go:179] * Using the kvm2 driver based on user configuration
I1227 08:54:37.399595 24108 start.go:309] selected driver: kvm2
I1227 08:54:37.399614 24108 start.go:928] validating driver "kvm2" against <nil>
I1227 08:54:37.399634 24108 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 08:54:37.400332 24108 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 08:54:37.400590 24108 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 08:54:37.400626 24108 cni.go:84] Creating CNI manager for ""
I1227 08:54:37.400682 24108 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I1227 08:54:37.400692 24108 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1227 08:54:37.400744 24108 start.go:353] cluster config:
{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 08:54:37.400897 24108 iso.go:125] acquiring lock: {Name:mkf3af0a60e6ccee2eeb813de50903ed5d7e8922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 08:54:37.402631 24108 out.go:179] * Starting "multinode-899276" primary control-plane node in "multinode-899276" cluster
I1227 08:54:37.403816 24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 08:54:37.403844 24108 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
I1227 08:54:37.403854 24108 cache.go:65] Caching tarball of preloaded images
I1227 08:54:37.403951 24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1227 08:54:37.403967 24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
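The two preload lines above just verify that a cached tarball exists before skipping the download. A hedged spot check of what minikube found, using the cache path from this log:

ls -lh /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4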
I1227 08:54:37.404346 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:54:37.404374 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json: {Name:mk5e07ed738ae868a23976588c175a8cb2b30a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:54:37.404563 24108 start.go:360] acquireMachinesLock for multinode-899276: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1227 08:54:37.404598 24108 start.go:364] duration metric: took 20.431µs to acquireMachinesLock for "multinode-899276"
I1227 08:54:37.404622 24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 08:54:37.404675 24108 start.go:125] createHost starting for "" (driver="kvm2")
I1227 08:54:37.407102 24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I1227 08:54:37.407274 24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
I1227 08:54:37.407306 24108 client.go:173] LocalClient.Create starting
I1227 08:54:37.407365 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
I1227 08:54:37.407409 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:54:37.407425 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:54:37.407478 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
I1227 08:54:37.407496 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:54:37.407507 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:54:37.407806 24108 main.go:144] libmachine: creating domain...
I1227 08:54:37.407817 24108 main.go:144] libmachine: creating network...
I1227 08:54:37.409512 24108 main.go:144] libmachine: found existing default network
I1227 08:54:37.409702 24108 main.go:144] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1227 08:54:37.410292 24108 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001caea70}
I1227 08:54:37.410380 24108 main.go:144] libmachine: defining private network:
<network>
<name>mk-multinode-899276</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
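libmachine hands the XML above straight to libvirt. A hedged sketch of the same step done by hand with virsh, assuming the qemu:///system URI from this log (the mk-net.xml file name is illustrative):

cat > mk-net.xml <<'EOF'
<network>
  <name>mk-multinode-899276</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>
EOF
virsh --connect qemu:///system net-define mk-net.xml        # persist the network definition
virsh --connect qemu:///system net-start mk-multinode-899276
virsh --connect qemu:///system net-autostart mk-multinode-899276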
I1227 08:54:37.416200 24108 main.go:144] libmachine: creating private network mk-multinode-899276 192.168.39.0/24...
I1227 08:54:37.484690 24108 main.go:144] libmachine: private network mk-multinode-899276 192.168.39.0/24 created
I1227 08:54:37.484994 24108 main.go:144] libmachine: <network>
<name>mk-multinode-899276</name>
<uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:7e:96:0f'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1227 08:54:37.485088 24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
I1227 08:54:37.485112 24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
I1227 08:54:37.485123 24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:54:37.485174 24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
I1227 08:54:37.708878 24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa...
I1227 08:54:37.789981 24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk...
I1227 08:54:37.790024 24108 main.go:144] libmachine: Writing magic tar header
I1227 08:54:37.790040 24108 main.go:144] libmachine: Writing SSH key tar header
I1227 08:54:37.790127 24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
I1227 08:54:37.790183 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276
I1227 08:54:37.790204 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 (perms=drwx------)
I1227 08:54:37.790215 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
I1227 08:54:37.790225 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
I1227 08:54:37.790238 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:54:37.790249 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
I1227 08:54:37.790257 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
I1227 08:54:37.790265 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
I1227 08:54:37.790275 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1227 08:54:37.790287 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1227 08:54:37.790303 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
I1227 08:54:37.790313 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1227 08:54:37.790321 24108 main.go:144] libmachine: checking permissions on dir: /home
I1227 08:54:37.790330 24108 main.go:144] libmachine: skipping /home - not owner
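The permission walk above climbs from the machine directory toward /, making each directory it owns traversable and stopping at the first one it does not own (/home here). A hedged bash equivalent, with the store path taken from this log:

d=/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276
while [ "$d" != "/" ]; do
  if [ -O "$d" ]; then
    chmod u+x "$d"                      # "setting executable bit" on owned dirs
  else
    echo "skipping $d - not owner"; break
  fi
  d=$(dirname "$d")
done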
I1227 08:54:37.790334 24108 main.go:144] libmachine: defining domain...
I1227 08:54:37.792061 24108 main.go:144] libmachine: defining domain using XML:
<domain type='kvm'>
<name>multinode-899276</name>
<memory unit='MiB'>3072</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-multinode-899276'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
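The domain XML above is the minimal definition libmachine submits; libvirt fills in UUID, PCI addresses, and controllers, which is why the "starting domain XML" dump further below is longer. A hedged virsh equivalent (domain.xml is an illustrative file name):

virsh --connect qemu:///system define domain.xml            # persist the domain
virsh --connect qemu:///system start multinode-899276
virsh --connect qemu:///system dumpxml multinode-899276     # libvirt-expanded XML, as echoed below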
I1227 08:54:37.797217 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:e2:49:84 in network default
I1227 08:54:37.797913 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:37.797931 24108 main.go:144] libmachine: starting domain...
I1227 08:54:37.797936 24108 main.go:144] libmachine: ensuring networks are active...
I1227 08:54:37.798746 24108 main.go:144] libmachine: Ensuring network default is active
I1227 08:54:37.799132 24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
I1227 08:54:37.799776 24108 main.go:144] libmachine: getting domain XML...
I1227 08:54:37.800794 24108 main.go:144] libmachine: starting domain XML:
<domain type='kvm'>
<name>multinode-899276</name>
<uuid>6d370929-9382-4953-8ba6-4fb6eca3e648</uuid>
<memory unit='KiB'>3145728</memory>
<currentMemory unit='KiB'>3145728</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:4c:5c:b4'/>
<source network='mk-multinode-899276'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:e2:49:84'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1227 08:54:39.079279 24108 main.go:144] libmachine: waiting for domain to start...
I1227 08:54:39.080610 24108 main.go:144] libmachine: domain is now running
I1227 08:54:39.080624 24108 main.go:144] libmachine: waiting for IP...
I1227 08:54:39.081451 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.082023 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.082037 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.082336 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:39.082377 24108 retry.go:84] will retry after 200ms: waiting for domain to come up
I1227 08:54:39.326020 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.326723 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.326741 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.327098 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:39.575768 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.576511 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.576534 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.576883 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:39.876331 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.877091 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.877107 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.877413 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:40.370368 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:40.371069 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:40.371086 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:40.371431 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:40.865483 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:40.866211 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:40.866236 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:40.866603 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:41.484623 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:41.485260 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:41.485279 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:41.485638 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:42.393849 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:42.394445 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:42.394463 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:42.394914 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:43.319225 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:43.320003 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:43.320020 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:43.320334 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:44.724122 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:44.724874 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:44.724891 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:44.725237 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:46.322345 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:46.323107 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:46.323130 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:46.323457 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:48.157422 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:48.158091 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:48.158110 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:48.158455 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:51.501875 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:51.502515 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:51.502530 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:51.502791 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:51.502830 24108 retry.go:84] will retry after 4.3s: waiting for domain to come up
I1227 08:54:55.837835 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:55.838577 24108 main.go:144] libmachine: domain multinode-899276 has current primary IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:55.838596 24108 main.go:144] libmachine: found domain IP: 192.168.39.24
I1227 08:54:55.838605 24108 main.go:144] libmachine: reserving static IP address...
I1227 08:54:55.839242 24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276", mac: "52:54:00:4c:5c:b4", ip: "192.168.39.24"} in network mk-multinode-899276
I1227 08:54:56.025597 24108 main.go:144] libmachine: reserved static IP address 192.168.39.24 for domain multinode-899276
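The wait loop above polls the network's DHCP leases, falls back to ARP, and once an address appears pins it as a static host entry. A hedged sketch of the same checks with virsh, using the MAC and IP from this log:

virsh --connect qemu:///system net-dhcp-leases mk-multinode-899276
# roughly what "reserving static IP address" does:
virsh --connect qemu:///system net-update mk-multinode-899276 add ip-dhcp-host \
  "<host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>" \
  --live --config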
I1227 08:54:56.025623 24108 main.go:144] libmachine: waiting for SSH...
I1227 08:54:56.025631 24108 main.go:144] libmachine: Getting to WaitForSSH function...
I1227 08:54:56.028518 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.029028 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.029077 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.029273 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.029482 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.029494 24108 main.go:144] libmachine: About to run SSH command:
exit 0
I1227 08:54:56.143804 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
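WaitForSSH is just "run `exit 0` over SSH until it succeeds". A hedged one-liner equivalent, using the key path and guest user shown elsewhere in this log:

until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa \
      docker@192.168.39.24 'exit 0' 2>/dev/null; do
  sleep 1     # keep probing until sshd is up
done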
I1227 08:54:56.144248 24108 main.go:144] libmachine: domain creation complete
I1227 08:54:56.146013 24108 machine.go:94] provisionDockerMachine start ...
I1227 08:54:56.148712 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.149157 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.149206 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.149383 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.149565 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.149574 24108 main.go:144] libmachine: About to run SSH command:
hostname
I1227 08:54:56.263810 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
I1227 08:54:56.263841 24108 buildroot.go:166] provisioning hostname "multinode-899276"
I1227 08:54:56.266910 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.267410 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.267435 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.267640 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.267847 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.267858 24108 main.go:144] libmachine: About to run SSH command:
sudo hostname multinode-899276 && echo "multinode-899276" | sudo tee /etc/hostname
I1227 08:54:56.401325 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276
I1227 08:54:56.404664 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.405235 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.405263 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.405433 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.405644 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.405659 24108 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-899276' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276/g' /etc/hosts;
else
echo '127.0.1.1 multinode-899276' | sudo tee -a /etc/hosts;
fi
fi
I1227 08:54:56.543193 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 08:54:56.543230 24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
I1227 08:54:56.543264 24108 buildroot.go:174] setting up certificates
I1227 08:54:56.543282 24108 provision.go:84] configureAuth start
I1227 08:54:56.546171 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.546588 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.546612 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.548760 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.549114 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.549136 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.549243 24108 provision.go:143] copyHostCerts
I1227 08:54:56.549266 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:54:56.549290 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
I1227 08:54:56.549298 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:54:56.549370 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
I1227 08:54:56.549490 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:54:56.549516 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
I1227 08:54:56.549522 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:54:56.549548 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
I1227 08:54:56.549593 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:54:56.549609 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
I1227 08:54:56.549615 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:54:56.549634 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
I1227 08:54:56.549680 24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276 san=[127.0.0.1 192.168.39.24 localhost minikube multinode-899276]
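minikube generates that server certificate in Go; a hedged openssl sketch of an equivalent cert with the same org and san=[...] list from the line above (the -days value is illustrative, not minikube's):

openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj '/O=jenkins.multinode-899276' -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 1095 \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.24,DNS:localhost,DNS:minikube,DNS:multinode-899276')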
I1227 08:54:56.564952 24108 provision.go:177] copyRemoteCerts
I1227 08:54:56.565003 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 08:54:56.567240 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.567643 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.567677 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.567850 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:56.656198 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 08:54:56.656292 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 08:54:56.685216 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 08:54:56.685304 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1227 08:54:56.714733 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 08:54:56.714819 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 08:54:56.743305 24108 provision.go:87] duration metric: took 199.989326ms to configureAuth
I1227 08:54:56.743338 24108 buildroot.go:189] setting minikube options for container-runtime
I1227 08:54:56.743528 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:54:56.746235 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.746587 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.746606 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.746782 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.747027 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.747039 24108 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 08:54:56.861225 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
I1227 08:54:56.861255 24108 buildroot.go:70] root file system type: tmpfs
I1227 08:54:56.861417 24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 08:54:56.864305 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.864731 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.864767 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.864925 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.865130 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.865170 24108 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 08:54:56.996399 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1227 08:54:56.999444 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.999882 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.999912 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:57.000156 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:57.000379 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:57.000396 24108 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 08:54:57.924795 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
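The `diff ... || { ... }` command above installs the new unit only when the existing one is missing or differs; here diff failed with "can't stat", so the mv/daemon-reload/enable/restart branch ran and created the symlink. A hedged way to verify the result over the same SSH session:

sudo systemctl cat docker.service | head -n 5   # confirm the unit now exists
systemctl is-enabled docker                     # should print "enabled"
sudo systemctl is-active docker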
I1227 08:54:57.924823 24108 machine.go:97] duration metric: took 1.778786884s to provisionDockerMachine
I1227 08:54:57.924839 24108 client.go:176] duration metric: took 20.517522558s to LocalClient.Create
I1227 08:54:57.924853 24108 start.go:167] duration metric: took 20.517578026s to libmachine.API.Create "multinode-899276"
I1227 08:54:57.924862 24108 start.go:293] postStartSetup for "multinode-899276" (driver="kvm2")
I1227 08:54:57.924874 24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 08:54:57.924962 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 08:54:57.927733 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:57.928188 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:57.928219 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:57.928364 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:58.017094 24108 ssh_runner.go:195] Run: cat /etc/os-release
I1227 08:54:58.021892 24108 info.go:137] Remote host: Buildroot 2025.02
I1227 08:54:58.021927 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
I1227 08:54:58.022001 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
I1227 08:54:58.022108 24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
I1227 08:54:58.022115 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
I1227 08:54:58.022194 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 08:54:58.035018 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:54:58.064746 24108 start.go:296] duration metric: took 139.872084ms for postStartSetup
I1227 08:54:58.067860 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.068279 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.068306 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.068579 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:54:58.068756 24108 start.go:128] duration metric: took 20.664071028s to createHost
I1227 08:54:58.071566 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.072015 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.072040 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.072244 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:58.072473 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:58.072488 24108 main.go:144] libmachine: About to run SSH command:
date +%s.%N
I1227 08:54:58.187322 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825698.156416973
I1227 08:54:58.187344 24108 fix.go:216] guest clock: 1766825698.156416973
I1227 08:54:58.187351 24108 fix.go:229] Guest: 2025-12-27 08:54:58.156416973 +0000 UTC Remote: 2025-12-27 08:54:58.068766977 +0000 UTC m=+20.766440443 (delta=87.649996ms)
I1227 08:54:58.187367 24108 fix.go:200] guest clock delta is within tolerance: 87.649996ms
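The clock check compares the guest's `date +%s.%N` against a host-side timestamp and accepts small deltas (~87ms here). A hedged sketch of the same comparison from the host, reusing the SSH key path from this log:

SSH='ssh -i /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa docker@192.168.39.24'
guest=$($SSH 'date +%s.%N'); host=$(date +%s.%N)
echo "guest-host delta: $(echo "$guest - $host" | bc)s"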
I1227 08:54:58.187371 24108 start.go:83] releasing machines lock for "multinode-899276", held for 20.782762567s
I1227 08:54:58.189878 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.190311 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.190336 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.190848 24108 ssh_runner.go:195] Run: cat /version.json
I1227 08:54:58.190934 24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 08:54:58.193909 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.193920 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.194367 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.194393 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.194412 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.194445 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.194571 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:58.194749 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:58.303202 24108 ssh_runner.go:195] Run: systemctl --version
I1227 08:54:58.309380 24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 08:54:58.315530 24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 08:54:58.315591 24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 08:54:58.335551 24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1227 08:54:58.335587 24108 start.go:496] detecting cgroup driver to use...
I1227 08:54:58.335615 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:54:58.335736 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:54:58.357443 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 08:54:58.369407 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 08:54:58.384702 24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 08:54:58.384807 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 08:54:58.399640 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:54:58.412464 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 08:54:58.424691 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:54:58.437707 24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 08:54:58.450402 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 08:54:58.462916 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 08:54:58.475650 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
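
The sed runs above rewrite /etc/containerd/config.toml in place: pinning the pause sandbox image, forcing SystemdCgroup = true, and normalizing the runc runtime to io.containerd.runc.v2. A minimal Go sketch of just the SystemdCgroup rewrite (a hypothetical standalone equivalent of the logged sed expression, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml" // path as used in the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
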
I1227 08:54:58.493530 24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 08:54:58.504139 24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1227 08:54:58.504192 24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1227 08:54:58.516423 24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
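
The bridge-nf-call-iptables sysctl is allowed to fail above because the br_netfilter module may not be loaded yet; after modprobe, IPv4 forwarding is enabled directly through procfs. The same write in Go (needs root, like the logged sudo sh -c):

package main

import "os"

func main() {
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}
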
I1227 08:54:58.528272 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:54:58.673716 24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 08:54:58.720867 24108 start.go:496] detecting cgroup driver to use...
I1227 08:54:58.720909 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:54:58.720972 24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 08:54:58.744526 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:54:58.764985 24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 08:54:58.785879 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:54:58.803205 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:54:58.821885 24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 08:54:58.856773 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:54:58.873676 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:54:58.896773 24108 ssh_runner.go:195] Run: which cri-dockerd
I1227 08:54:58.901095 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 08:54:58.912977 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1227 08:54:58.935679 24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 08:54:59.087073 24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 08:54:59.235233 24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 08:54:59.235368 24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1227 08:54:59.257291 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 08:54:59.273342 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:54:59.413736 24108 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 08:54:59.868087 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 08:54:59.883321 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 08:54:59.898581 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:54:59.913286 24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 08:55:00.062974 24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 08:55:00.214186 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:00.363957 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 08:55:00.400471 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 08:55:00.416741 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:00.560590 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 08:55:00.668182 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:55:00.687244 24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 08:55:00.687326 24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 08:55:00.693883 24108 start.go:574] Will wait 60s for crictl version
I1227 08:55:00.693968 24108 ssh_runner.go:195] Run: which crictl
I1227 08:55:00.698083 24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1227 08:55:00.732884 24108 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.2
RuntimeApiVersion: v1
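
crictl confirms that the runtime behind the CRI socket is Docker 28.5.2 via cri-dockerd. A sketch that shells out the same way and splits the key/value output shown above (assumes crictl at the logged path and a configured /etc/crictl.yaml):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		panic(err)
	}
	// Output is "Key: value" lines, e.g. "RuntimeVersion: 28.5.2".
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fmt.Printf("%s = %s\n", strings.TrimSpace(k), strings.TrimSpace(v))
		}
	}
}
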
I1227 08:55:00.732961 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:55:00.764467 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:55:00.793639 24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
I1227 08:55:00.796490 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:00.796890 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:55:00.796916 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:00.797129 24108 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1227 08:55:00.801979 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 08:55:00.819694 24108 kubeadm.go:884] updating cluster {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 08:55:00.819800 24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 08:55:00.819853 24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 08:55:00.841928 24108 docker.go:694] Got preloaded images:
I1227 08:55:00.841951 24108 docker.go:700] registry.k8s.io/kube-apiserver:v1.35.0 wasn't preloaded
I1227 08:55:00.841997 24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1227 08:55:00.855548 24108 ssh_runner.go:195] Run: which lz4
I1227 08:55:00.860486 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I1227 08:55:00.860594 24108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1227 08:55:00.865387 24108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1227 08:55:00.865417 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284632523 bytes)
I1227 08:55:01.961740 24108 docker.go:658] duration metric: took 1.101175277s to copy over tarball
I1227 08:55:01.961831 24108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1227 08:55:03.184079 24108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.222186343s)
I1227 08:55:03.184117 24108 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1227 08:55:03.216811 24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1227 08:55:03.229331 24108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
I1227 08:55:03.250420 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 08:55:03.266159 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:03.414345 24108 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 08:55:05.441484 24108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.027089175s)
I1227 08:55:05.441602 24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 08:55:05.460483 24108 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
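
Once the tarball is unpacked and dockerd restarted, the image list is re-read to decide whether image loading can be skipped. A sketch of that check, with the sentinel image taken from the stdout block above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	// One sentinel is enough here: the log keys the decision off kube-apiserver.
	if !have["registry.k8s.io/kube-apiserver:v1.35.0"] {
		fmt.Println("kube-apiserver wasn't preloaded; falling back to loading images")
		return
	}
	fmt.Println("images are preloaded, skipping loading")
}
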
I1227 08:55:05.460508 24108 cache_images.go:86] Images are preloaded, skipping loading
I1227 08:55:05.460517 24108 kubeadm.go:935] updating node { 192.168.39.24 8443 v1.35.0 docker true true} ...
I1227 08:55:05.460610 24108 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
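
The unit drop-in above replaces kubelet's ExecStart with node-specific flags. One way to render such a drop-in with text/template; the template shape is an approximation (a subset of the logged flags), with values from this run:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf
[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.35.0",
		"Node":    "multinode-899276",
		"IP":      "192.168.39.24",
	})
}
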
I1227 08:55:05.460667 24108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1227 08:55:05.512991 24108 cni.go:84] Creating CNI manager for ""
I1227 08:55:05.513022 24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I1227 08:55:05.513043 24108 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 08:55:05.513080 24108 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899276 NodeName:multinode-899276 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 08:55:05.513228 24108 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.24
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-899276"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.24"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
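
The kubeadm.yaml above is a four-document stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural check with gopkg.in/yaml.v3, decoding each document in turn (illustrative only; this is not how minikube validates the config):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
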
I1227 08:55:05.513292 24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 08:55:05.525546 24108 binaries.go:51] Found k8s binaries, skipping transfer
I1227 08:55:05.525616 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 08:55:05.537237 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
I1227 08:55:05.557993 24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 08:55:05.579343 24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
I1227 08:55:05.600550 24108 ssh_runner.go:195] Run: grep 192.168.39.24 control-plane.minikube.internal$ /etc/hosts
I1227 08:55:05.605151 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 08:55:05.620984 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:05.769960 24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 08:55:05.800659 24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.24
I1227 08:55:05.800681 24108 certs.go:195] generating shared ca certs ...
I1227 08:55:05.800706 24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.800879 24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
I1227 08:55:05.800934 24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
I1227 08:55:05.800949 24108 certs.go:257] generating profile certs ...
I1227 08:55:05.801012 24108 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key
I1227 08:55:05.801071 24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt with IP's: []
I1227 08:55:05.940834 24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt ...
I1227 08:55:05.940874 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt: {Name:mk02178aca7f56d432d5f5e37ab494f5434cad17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.941124 24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key ...
I1227 08:55:05.941147 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key: {Name:mk6471e99270ac274eb8d161834a8e74a99ce837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.941271 24108 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d
I1227 08:55:05.941294 24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
I1227 08:55:05.986153 24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d ...
I1227 08:55:05.986188 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d: {Name:mk802401bb34f0577b94f18188268edd10cab228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.986405 24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d ...
I1227 08:55:05.986426 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d: {Name:mk499be31979f3e860f435493b7a49f6c8a77f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.986541 24108 certs.go:382] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt
I1227 08:55:05.986669 24108 certs.go:386] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key
I1227 08:55:05.986770 24108 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key
I1227 08:55:05.986801 24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt with IP's: []
I1227 08:55:06.117402 24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt ...
I1227 08:55:06.117436 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt: {Name:mkff498d36179d0686c029b1a0d2c2aa28970730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:06.117638 24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key ...
I1227 08:55:06.117659 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key: {Name:mkae01040e0a5553a361620eb1dc3658cbd20bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
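
Each profile cert above is a fresh key signed under the shared minikubeCA, with the service and node IPs embedded as SANs (the [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24] list logged for the apiserver cert). A self-contained sketch of the idea using crypto/x509; it self-signs for brevity where minikube would pass the CA cert and key as the parent:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.24"),
		},
	}
	// Self-signed here; minikube signs with its CA cert/key as parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
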
I1227 08:55:06.117774 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 08:55:06.117805 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 08:55:06.117825 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 08:55:06.117845 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 08:55:06.117861 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 08:55:06.117875 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 08:55:06.117888 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 08:55:06.117906 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 08:55:06.117969 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
W1227 08:55:06.118021 24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
I1227 08:55:06.118034 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
I1227 08:55:06.118087 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
I1227 08:55:06.118141 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
I1227 08:55:06.118179 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
I1227 08:55:06.118236 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:55:06.118294 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.118318 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
I1227 08:55:06.118337 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
I1227 08:55:06.118857 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 08:55:06.150178 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1227 08:55:06.179223 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 08:55:06.208476 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1227 08:55:06.239094 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1227 08:55:06.268368 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 08:55:06.297730 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 08:55:06.326802 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 08:55:06.357205 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 08:55:06.387582 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
I1227 08:55:06.417521 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
I1227 08:55:06.449486 24108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 08:55:06.473842 24108 ssh_runner.go:195] Run: openssl version
I1227 08:55:06.481673 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
I1227 08:55:06.494727 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
I1227 08:55:06.506605 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
I1227 08:55:06.511904 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
I1227 08:55:06.511979 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
I1227 08:55:06.522748 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 08:55:06.535114 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
I1227 08:55:06.546799 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.558007 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 08:55:06.569782 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.575189 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.575271 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.582359 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 08:55:06.594977 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 08:55:06.606187 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
I1227 08:55:06.617464 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
I1227 08:55:06.628478 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
I1227 08:55:06.633627 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
I1227 08:55:06.633684 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
I1227 08:55:06.640779 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 08:55:06.652579 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
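
The openssl x509 -hash calls above print the subject-name hash OpenSSL uses for CA lookup, and each PEM then gets a <hash>.0 symlink under /etc/ssl/certs. The same dance from Go, shelling out to openssl as the log does (the PEM path reuses the minikubeCA example above; the log links via /etc/ssl/certs, so this is a simplification):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
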
I1227 08:55:06.663960 24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 08:55:06.668886 24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 08:55:06.668953 24108 kubeadm.go:401] StartCluster: {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 08:55:06.669105 24108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1227 08:55:06.684838 24108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 08:55:06.696256 24108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 08:55:06.708324 24108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 08:55:06.720681 24108 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 08:55:06.720728 24108 kubeadm.go:158] found existing configuration files:
I1227 08:55:06.720787 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 08:55:06.731330 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 08:55:06.731392 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 08:55:06.744324 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 08:55:06.754995 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 08:55:06.755091 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 08:55:06.767513 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 08:55:06.778490 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 08:55:06.778576 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 08:55:06.789929 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 08:55:06.800709 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 08:55:06.800794 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 08:55:06.812666 24108 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
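
kubeadm init runs with a fixed preflight allowlist so pre-existing directories, the in-use kubelet port, and the 3072MB/2-CPU VM don't abort the bootstrap. Reassembling that invocation from its parts (flag list copied from the Start line above):

package main

import (
	"fmt"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := fmt.Sprintf(
		"env PATH=\"/var/lib/minikube/binaries/v1.35.0:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s",
		strings.Join(ignored, ","))
	fmt.Println(cmd)
}
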
I1227 08:55:07.024456 24108 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 08:55:15.975818 24108 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 08:55:15.975905 24108 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 08:55:15.976023 24108 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 08:55:15.976153 24108 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 08:55:15.976280 24108 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 08:55:15.976375 24108 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 08:55:15.977966 24108 out.go:252] - Generating certificates and keys ...
I1227 08:55:15.978092 24108 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 08:55:15.978154 24108 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 08:55:15.978227 24108 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 08:55:15.978279 24108 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 08:55:15.978354 24108 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 08:55:15.978437 24108 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 08:55:15.978507 24108 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 08:55:15.978652 24108 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
I1227 08:55:15.978708 24108 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 08:55:15.978817 24108 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
I1227 08:55:15.978879 24108 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 08:55:15.978934 24108 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 08:55:15.979025 24108 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 08:55:15.979124 24108 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 08:55:15.979189 24108 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 08:55:15.979284 24108 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 08:55:15.979376 24108 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 08:55:15.979463 24108 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 08:55:15.979528 24108 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 08:55:15.979667 24108 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 08:55:15.979731 24108 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 08:55:15.981818 24108 out.go:252] - Booting up control plane ...
I1227 08:55:15.981903 24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 08:55:15.981981 24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 08:55:15.982067 24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 08:55:15.982163 24108 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 08:55:15.982243 24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 08:55:15.982343 24108 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 08:55:15.982416 24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 08:55:15.982468 24108 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 08:55:15.982635 24108 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 08:55:15.982810 24108 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 08:55:15.982906 24108 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001479517s
I1227 08:55:15.983060 24108 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1227 08:55:15.983187 24108 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
I1227 08:55:15.983294 24108 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1227 08:55:15.983366 24108 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1227 08:55:15.983434 24108 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508222077s
I1227 08:55:15.983490 24108 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.795811505s
I1227 08:55:15.983547 24108 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00280761s
I1227 08:55:15.983634 24108 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1227 08:55:15.983743 24108 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1227 08:55:15.983806 24108 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1227 08:55:15.983962 24108 kubeadm.go:319] [mark-control-plane] Marking the node multinode-899276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1227 08:55:15.984029 24108 kubeadm.go:319] [bootstrap-token] Using token: 8gubmu.jzeht1x7riked3vp
I1227 08:55:15.985339 24108 out.go:252] - Configuring RBAC rules ...
I1227 08:55:15.985468 24108 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1227 08:55:15.985590 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1227 08:55:15.985836 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1227 08:55:15.985963 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1227 08:55:15.986071 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1227 08:55:15.986140 24108 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1227 08:55:15.986233 24108 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1227 08:55:15.986269 24108 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1227 08:55:15.986315 24108 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1227 08:55:15.986323 24108 kubeadm.go:319]
I1227 08:55:15.986381 24108 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1227 08:55:15.986390 24108 kubeadm.go:319]
I1227 08:55:15.986465 24108 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1227 08:55:15.986474 24108 kubeadm.go:319]
I1227 08:55:15.986507 24108 kubeadm.go:319] mkdir -p $HOME/.kube
I1227 08:55:15.986576 24108 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1227 08:55:15.986650 24108 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1227 08:55:15.986662 24108 kubeadm.go:319]
I1227 08:55:15.986752 24108 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1227 08:55:15.986762 24108 kubeadm.go:319]
I1227 08:55:15.986803 24108 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1227 08:55:15.986808 24108 kubeadm.go:319]
I1227 08:55:15.986860 24108 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1227 08:55:15.986924 24108 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1227 08:55:15.986987 24108 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1227 08:55:15.986995 24108 kubeadm.go:319]
I1227 08:55:15.987083 24108 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1227 08:55:15.987152 24108 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1227 08:55:15.987157 24108 kubeadm.go:319]
I1227 08:55:15.987230 24108 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
I1227 08:55:15.987318 24108 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c \
I1227 08:55:15.987337 24108 kubeadm.go:319] --control-plane
I1227 08:55:15.987343 24108 kubeadm.go:319]
I1227 08:55:15.987420 24108 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1227 08:55:15.987428 24108 kubeadm.go:319]
I1227 08:55:15.987499 24108 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
I1227 08:55:15.987622 24108 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c
I1227 08:55:15.987640 24108 cni.go:84] Creating CNI manager for ""
I1227 08:55:15.987649 24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I1227 08:55:15.989869 24108 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1227 08:55:15.990980 24108 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1227 08:55:15.997094 24108 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
I1227 08:55:15.997119 24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
I1227 08:55:16.018807 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1227 08:55:16.327079 24108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1227 08:55:16.327141 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:16.327146 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276 minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=true
I1227 08:55:16.365159 24108 ops.go:34] apiserver oom_adj: -16
I1227 08:55:16.465863 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:16.966866 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:17.466570 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:17.966578 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:18.466519 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:18.966943 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:19.466148 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:19.966252 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:20.466874 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:20.559551 24108 kubeadm.go:1114] duration metric: took 4.232470194s to wait for elevateKubeSystemPrivileges
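
The repeated get sa default calls above are a 500ms poll for kube-system's default ServiceAccount, which must exist before the minikube-rbac cluster-admin binding can take effect. The same wait written as a plain retry loop (paths from the log; the 60s deadline is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // assumed timeout
	for time.Now().Before(deadline) {
		err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.35.0/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
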
I1227 08:55:20.559594 24108 kubeadm.go:403] duration metric: took 13.890642839s to StartCluster
I1227 08:55:20.559615 24108 settings.go:142] acquiring lock: {Name:mk44fcba3019847ba7794682dc7fa5d4c6839e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:20.559700 24108 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22344-5516/kubeconfig
I1227 08:55:20.560349 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/kubeconfig: {Name:mk9f130990d4b2bd0dfe5788b549d55d90047007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:20.560606 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1227 08:55:20.560624 24108 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1227 08:55:20.560698 24108 addons.go:70] Setting storage-provisioner=true in profile "multinode-899276"
I1227 08:55:20.560599 24108 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 08:55:20.560734 24108 addons.go:70] Setting default-storageclass=true in profile "multinode-899276"
I1227 08:55:20.560754 24108 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "multinode-899276"
I1227 08:55:20.560889 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:55:20.560722 24108 addons.go:239] Setting addon storage-provisioner=true in "multinode-899276"
I1227 08:55:20.560976 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:55:20.563353 24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1227 08:55:20.563858 24108 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1227 08:55:20.563881 24108 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1227 08:55:20.563887 24108 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1227 08:55:20.563895 24108 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
I1227 08:55:20.563910 24108 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
I1227 08:55:20.563922 24108 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
I1227 08:55:20.563927 24108 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
I1227 08:55:20.564267 24108 addons.go:239] Setting addon default-storageclass=true in "multinode-899276"
I1227 08:55:20.564296 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:55:20.566001 24108 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1227 08:55:20.566022 24108 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1227 08:55:20.566660 24108 out.go:179] * Verifying Kubernetes components...
I1227 08:55:20.566668 24108 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1227 08:55:20.568005 24108 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1227 08:55:20.568024 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:20.568027 24108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1227 08:55:20.568764 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.569218 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:55:20.569253 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.569506 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:55:20.570678 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.571119 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:55:20.571146 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.571271 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:55:20.721800 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
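The pipeline above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side bridge IP (192.168.39.1 here). A quick way to spot-check the result from inside the node, reusing the same kubectl binary and kubeconfig paths as the logged command:

    sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'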
I1227 08:55:20.853268 24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 08:55:21.022237 24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1227 08:55:21.022257 24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1227 08:55:21.456081 24108 start.go:987] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1227 08:55:21.456682 24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1227 08:55:21.456749 24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1227 08:55:21.457033 24108 node_ready.go:35] waiting up to 6m0s for node "multinode-899276" to be "Ready" ...
I1227 08:55:21.828507 24108 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1227 08:55:21.829821 24108 addons.go:530] duration metric: took 1.269198648s for enable addons: enabled=[storage-provisioner default-storageclass]
I1227 08:55:21.962140 24108 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-899276" context rescaled to 1 replicas
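Rescaling coredns to a single replica is the expected steady state for a cluster with one control-plane node; done by hand, the equivalent operation would be roughly:

    kubectl -n kube-system scale deployment coredns --replicas=1
    kubectl -n kube-system get deployment coredns    # READY should settle at 1/1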
W1227 08:55:23.460520 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:25.461678 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:27.960886 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:30.459943 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:32.460468 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:34.460900 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:36.960939 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:39.460258 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
I1227 08:55:40.960160 24108 node_ready.go:49] node "multinode-899276" is "Ready"
I1227 08:55:40.960196 24108 node_ready.go:38] duration metric: took 19.503123053s for node "multinode-899276" to be "Ready" ...
I1227 08:55:40.960216 24108 api_server.go:52] waiting for apiserver process to appear ...
I1227 08:55:40.960272 24108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1227 08:55:40.980487 24108 api_server.go:72] duration metric: took 20.419735752s to wait for apiserver process to appear ...
I1227 08:55:40.980522 24108 api_server.go:88] waiting for apiserver healthz status ...
I1227 08:55:40.980545 24108 api_server.go:299] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
I1227 08:55:40.985397 24108 api_server.go:325] https://192.168.39.24:8443/healthz returned 200:
ok
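The same healthz probe can be reproduced outside the test harness with curl, assuming the profile client certificates from the rest.Config dump above (paths relative to MINIKUBE_HOME):

    curl --cacert $MINIKUBE_HOME/ca.crt \
         --cert   $MINIKUBE_HOME/profiles/multinode-899276/client.crt \
         --key    $MINIKUBE_HOME/profiles/multinode-899276/client.key \
         https://192.168.39.24:8443/healthz    # prints ok when healthy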
I1227 08:55:40.986902 24108 api_server.go:141] control plane version: v1.35.0
I1227 08:55:40.986929 24108 api_server.go:131] duration metric: took 6.398762ms to wait for apiserver health ...
I1227 08:55:40.986938 24108 system_pods.go:43] waiting for kube-system pods to appear ...
I1227 08:55:40.990608 24108 system_pods.go:59] 8 kube-system pods found
I1227 08:55:40.990654 24108 system_pods.go:61] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:40.990664 24108 system_pods.go:61] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:40.990674 24108 system_pods.go:61] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:40.990682 24108 system_pods.go:61] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:40.990688 24108 system_pods.go:61] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:40.990698 24108 system_pods.go:61] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:40.990703 24108 system_pods.go:61] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:40.990715 24108 system_pods.go:61] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:40.990723 24108 system_pods.go:74] duration metric: took 3.778634ms to wait for pod list to return data ...
I1227 08:55:40.990733 24108 default_sa.go:34] waiting for default service account to be created ...
I1227 08:55:40.993709 24108 default_sa.go:45] found service account: "default"
I1227 08:55:40.993729 24108 default_sa.go:55] duration metric: took 2.988456ms for default service account to be created ...
I1227 08:55:40.993736 24108 system_pods.go:116] waiting for k8s-apps to be running ...
I1227 08:55:40.996625 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:40.996661 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:40.996672 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:40.996683 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:40.996690 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:40.996698 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:40.996709 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:40.996716 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:40.996727 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:40.996757 24108 retry.go:84] will retry after 200ms: missing components: kube-dns
I1227 08:55:41.222991 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:41.223041 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:41.223072 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:41.223082 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:41.223088 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:41.223095 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:41.223101 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:41.223107 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:41.223115 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:41.595420 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:41.595456 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:41.595463 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:41.595468 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:41.595472 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:41.595476 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:41.595479 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:41.595482 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:41.595487 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:41.921377 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:41.921417 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Running
I1227 08:55:41.921426 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:41.921432 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:41.921437 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:41.921443 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:41.921448 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:41.921453 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:41.921458 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Running
I1227 08:55:41.921468 24108 system_pods.go:126] duration metric: took 927.725772ms to wait for k8s-apps to be running ...
I1227 08:55:41.921482 24108 system_svc.go:44] waiting for kubelet service to be running ....
I1227 08:55:41.921538 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 08:55:41.943521 24108 system_svc.go:56] duration metric: took 22.03282ms WaitForService to wait for kubelet
I1227 08:55:41.943547 24108 kubeadm.go:587] duration metric: took 21.382801319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 08:55:41.943563 24108 node_conditions.go:102] verifying NodePressure condition ...
I1227 08:55:41.946923 24108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1227 08:55:41.946949 24108 node_conditions.go:123] node cpu capacity is 2
I1227 08:55:41.946964 24108 node_conditions.go:105] duration metric: took 3.396847ms to run NodePressure ...
I1227 08:55:41.946975 24108 start.go:242] waiting for startup goroutines ...
I1227 08:55:41.946982 24108 start.go:247] waiting for cluster config update ...
I1227 08:55:41.946995 24108 start.go:256] writing updated cluster config ...
I1227 08:55:41.949394 24108 out.go:203]
I1227 08:55:41.951062 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:55:41.951143 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:55:41.952889 24108 out.go:179] * Starting "multinode-899276-m02" worker node in "multinode-899276" cluster
I1227 08:55:41.954248 24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 08:55:41.954267 24108 cache.go:65] Caching tarball of preloaded images
I1227 08:55:41.954391 24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1227 08:55:41.954406 24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1227 08:55:41.954483 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:55:41.954681 24108 start.go:360] acquireMachinesLock for multinode-899276-m02: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1227 08:55:41.954734 24108 start.go:364] duration metric: took 30.88µs to acquireMachinesLock for "multinode-899276-m02"
I1227 08:55:41.954766 24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
I1227 08:55:41.954827 24108 start.go:125] createHost starting for "m02" (driver="kvm2")
I1227 08:55:41.956569 24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I1227 08:55:41.956662 24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
I1227 08:55:41.956692 24108 client.go:173] LocalClient.Create starting
I1227 08:55:41.956761 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
I1227 08:55:41.956803 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:55:41.956824 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:55:41.956873 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
I1227 08:55:41.956892 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:55:41.956910 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:55:41.957088 24108 main.go:144] libmachine: creating domain...
I1227 08:55:41.957098 24108 main.go:144] libmachine: creating network...
I1227 08:55:41.958253 24108 main.go:144] libmachine: found existing default network
I1227 08:55:41.958505 24108 main.go:144] libmachine: <network connections='1'>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1227 08:55:41.958687 24108 main.go:144] libmachine: found existing mk-multinode-899276 private network, skipping creation
I1227 08:55:41.958885 24108 main.go:144] libmachine: <network>
<name>mk-multinode-899276</name>
<uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:7e:96:0f'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
<host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>
</dhcp>
</ip>
</network>
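Note the static <host> entry pinning the control-plane node to 192.168.39.24 by MAC address. For reference, a hypothetical manual version of the same reservation (standard virsh syntax, not taken from this log) would look like:

    virsh net-dumpxml mk-multinode-899276    # inspect the live network definition
    virsh net-update mk-multinode-899276 add-last ip-dhcp-host \
      "<host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>" \
      --live --config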
I1227 08:55:41.959076 24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
I1227 08:55:41.959099 24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
I1227 08:55:41.959107 24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:55:41.959186 24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
I1227 08:55:42.180540 24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa...
I1227 08:55:42.254861 24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk...
I1227 08:55:42.254917 24108 main.go:144] libmachine: Writing magic tar header
I1227 08:55:42.254943 24108 main.go:144] libmachine: Writing SSH key tar header
I1227 08:55:42.255061 24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
I1227 08:55:42.255137 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02
I1227 08:55:42.255165 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 (perms=drwx------)
I1227 08:55:42.255182 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
I1227 08:55:42.255201 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
I1227 08:55:42.255216 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:55:42.255227 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
I1227 08:55:42.255238 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
I1227 08:55:42.255257 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
I1227 08:55:42.255282 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1227 08:55:42.255298 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1227 08:55:42.255318 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
I1227 08:55:42.255333 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1227 08:55:42.255348 24108 main.go:144] libmachine: checking permissions on dir: /home
I1227 08:55:42.255359 24108 main.go:144] libmachine: skipping /home - not owner
I1227 08:55:42.255363 24108 main.go:144] libmachine: defining domain...
I1227 08:55:42.256580 24108 main.go:144] libmachine: defining domain using XML:
<domain type='kvm'>
<name>multinode-899276-m02</name>
<memory unit='MiB'>3072</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-multinode-899276'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
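The XML above is handed to libvirt to register the new guest; the command-line equivalent of define-then-start would be approximately:

    virsh define multinode-899276-m02.xml    # register the domain (file name assumed)
    virsh start  multinode-899276-m02        # boot it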
I1227 08:55:42.265000 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:b3:04:b6 in network default
I1227 08:55:42.265650 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:42.265669 24108 main.go:144] libmachine: starting domain...
I1227 08:55:42.265674 24108 main.go:144] libmachine: ensuring networks are active...
I1227 08:55:42.266690 24108 main.go:144] libmachine: Ensuring network default is active
I1227 08:55:42.267245 24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
I1227 08:55:42.267833 24108 main.go:144] libmachine: getting domain XML...
I1227 08:55:42.269145 24108 main.go:144] libmachine: starting domain XML:
<domain type='kvm'>
<name>multinode-899276-m02</name>
<uuid>08f0927e-00b1-40b5-b768-ac07d0776e28</uuid>
<memory unit='KiB'>3145728</memory>
<currentMemory unit='KiB'>3145728</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:9b:0b:64'/>
<source network='mk-multinode-899276'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:b3:04:b6'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1227 08:55:43.575420 24108 main.go:144] libmachine: waiting for domain to start...
I1227 08:55:43.576915 24108 main.go:144] libmachine: domain is now running
I1227 08:55:43.576935 24108 main.go:144] libmachine: waiting for IP...
I1227 08:55:43.577720 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:43.578257 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:43.578273 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:43.578564 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
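Each polling round first consults libvirt's DHCP lease table and then falls back to the ARP cache, mirroring what virsh exposes directly (the arp source requires a reasonably recent libvirt):

    virsh domifaddr multinode-899276-m02 --source lease
    virsh domifaddr multinode-899276-m02 --source arp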
I1227 08:55:43.833127 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:43.833729 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:43.833744 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:43.834083 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:44.161636 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:44.162394 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:44.162413 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:44.162749 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:44.477602 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:44.478263 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:44.478282 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:44.478685 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:44.857427 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:44.858004 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:44.858026 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:44.858397 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:45.619396 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:45.619938 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:45.619953 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:45.620268 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:46.214206 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:46.214738 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:46.214760 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:46.215107 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:47.368589 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:47.369148 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:47.369169 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:47.369473 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:48.790105 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:48.790775 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:48.790792 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:48.791137 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:50.057612 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:50.058205 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:50.058230 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:50.058563 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:51.571769 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:51.572501 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:51.572522 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:51.572969 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:54.369906 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:54.370596 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:54.370610 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:54.370961 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:57.241023 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.241672 24108 main.go:144] libmachine: domain multinode-899276-m02 has current primary IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.241689 24108 main.go:144] libmachine: found domain IP: 192.168.39.160
I1227 08:55:57.241696 24108 main.go:144] libmachine: reserving static IP address...
I1227 08:55:57.242083 24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276-m02", mac: "52:54:00:9b:0b:64", ip: "192.168.39.160"} in network mk-multinode-899276
I1227 08:55:57.450637 24108 main.go:144] libmachine: reserved static IP address 192.168.39.160 for domain multinode-899276-m02
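Once the address is reserved, the network's lease table should show the worker's MAC mapped to the pinned IP, which can be confirmed with:

    virsh net-dhcp-leases mk-multinode-899276
    # expect a row mapping 52:54:00:9b:0b:64 to 192.168.39.160/24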
I1227 08:55:57.450661 24108 main.go:144] libmachine: waiting for SSH...
I1227 08:55:57.450668 24108 main.go:144] libmachine: Getting to WaitForSSH function...
I1227 08:55:57.453744 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.454265 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.454291 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.454489 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.454732 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.454744 24108 main.go:144] libmachine: About to run SSH command:
exit 0
I1227 08:55:57.569604 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
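The SSH wait is a plain exit-status probe. Run by hand with the machine key logged above (under MINIKUBE_HOME), it would be:

    ssh -i $MINIKUBE_HOME/machines/multinode-899276-m02/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        docker@192.168.39.160 'exit 0' && echo reachable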
I1227 08:55:57.570099 24108 main.go:144] libmachine: domain creation complete
I1227 08:55:57.571770 24108 machine.go:94] provisionDockerMachine start ...
I1227 08:55:57.574152 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.574608 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.574633 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.574862 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.575132 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.575147 24108 main.go:144] libmachine: About to run SSH command:
hostname
I1227 08:55:57.686687 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
I1227 08:55:57.686742 24108 buildroot.go:166] provisioning hostname "multinode-899276-m02"
I1227 08:55:57.689982 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.690439 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.690482 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.690712 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.690987 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.691006 24108 main.go:144] libmachine: About to run SSH command:
sudo hostname multinode-899276-m02 && echo "multinode-899276-m02" | sudo tee /etc/hostname
I1227 08:55:57.825642 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276-m02
I1227 08:55:57.828982 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.829434 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.829471 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.829664 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.829868 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.829883 24108 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-899276-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-899276-m02' | sudo tee -a /etc/hosts;
fi
fi
I1227 08:55:57.955353 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 08:55:57.955387 24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
I1227 08:55:57.955404 24108 buildroot.go:174] setting up certificates
I1227 08:55:57.955412 24108 provision.go:84] configureAuth start
I1227 08:55:57.958329 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.958721 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.958743 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.961212 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.961604 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.961634 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.961769 24108 provision.go:143] copyHostCerts
I1227 08:55:57.961801 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:55:57.961840 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
I1227 08:55:57.961853 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:55:57.961943 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
I1227 08:55:57.962064 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:55:57.962093 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
I1227 08:55:57.962101 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:55:57.962149 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
I1227 08:55:57.962220 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:55:57.962245 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
I1227 08:55:57.962253 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:55:57.962290 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
I1227 08:55:57.962357 24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276-m02 san=[127.0.0.1 192.168.39.160 localhost minikube multinode-899276-m02]
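The generated server certificate must carry every name and address in the san list above; this can be verified with openssl:

    openssl x509 -noout -text -in $MINIKUBE_HOME/machines/server.pem \
      | grep -A1 'Subject Alternative Name'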
I1227 08:55:58.062355 24108 provision.go:177] copyRemoteCerts
I1227 08:55:58.062418 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 08:55:58.065702 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.066127 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.066154 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.066319 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
I1227 08:55:58.156852 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 08:55:58.156925 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 08:55:58.186973 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 08:55:58.187035 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I1227 08:55:58.216314 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 08:55:58.216378 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 08:55:58.250146 24108 provision.go:87] duration metric: took 294.721391ms to configureAuth
I1227 08:55:58.250177 24108 buildroot.go:189] setting minikube options for container-runtime
I1227 08:55:58.250357 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:55:58.252989 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.253461 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.253487 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.253690 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:58.253921 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:58.253934 24108 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 08:55:58.373697 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
I1227 08:55:58.373723 24108 buildroot.go:70] root file system type: tmpfs
I1227 08:55:58.373873 24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 08:55:58.376713 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.377114 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.377139 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.377329 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:58.377512 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:58.377555 24108 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment="NO_PROXY=192.168.39.24"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 08:55:58.508330 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment=NO_PROXY=192.168.39.24
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1227 08:55:58.511413 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.511851 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.511879 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.512069 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:58.512332 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:58.512351 24108 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 08:55:59.431853 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
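The diff-or-install one-liner above is worth noting: the freshly written docker.service.new replaces the live unit, followed by a daemon-reload, enable, and restart, only when the two files differ or the old unit is missing (as here, on first boot of the m02 VM). A minimal Go sketch of the same compare-then-swap idea, assuming plain local paths rather than minikube's SSH runner:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged replaces dst with src only when their contents differ
// or dst does not yet exist, mirroring the `diff || mv` idiom in the log.
// It reports whether a swap happened so the caller knows to restart the unit.
func installIfChanged(src, dst string) (bool, error) {
	newData, err := os.ReadFile(src)
	if err != nil {
		return false, err
	}
	oldData, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(oldData, newData) {
		return false, nil // identical contents: nothing to do
	}
	if err := os.Rename(src, dst); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	// Illustrative paths; the log operates on /lib/systemd/system/docker.service.
	changed, err := installIfChanged("/tmp/docker.service.new", "/tmp/docker.service")
	fmt.Println(changed, err) // on change, a real caller would daemon-reload and restart
}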
I1227 08:55:59.431877 24108 machine.go:97] duration metric: took 1.86008098s to provisionDockerMachine
I1227 08:55:59.431888 24108 client.go:176] duration metric: took 17.475186189s to LocalClient.Create
I1227 08:55:59.431902 24108 start.go:167] duration metric: took 17.47524121s to libmachine.API.Create "multinode-899276"
I1227 08:55:59.431909 24108 start.go:293] postStartSetup for "multinode-899276-m02" (driver="kvm2")
I1227 08:55:59.431918 24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 08:55:59.431968 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 08:55:59.434620 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.435132 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.435167 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.435355 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
I1227 08:55:59.525674 24108 ssh_runner.go:195] Run: cat /etc/os-release
I1227 08:55:59.530511 24108 info.go:137] Remote host: Buildroot 2025.02
I1227 08:55:59.530547 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
I1227 08:55:59.530632 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
I1227 08:55:59.530706 24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
I1227 08:55:59.530716 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
I1227 08:55:59.530821 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 08:55:59.542821 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:55:59.573575 24108 start.go:296] duration metric: took 141.651568ms for postStartSetup
I1227 08:55:59.576745 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.577190 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.577225 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.577486 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:55:59.577738 24108 start.go:128] duration metric: took 17.622900484s to createHost
I1227 08:55:59.579881 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.580246 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.580267 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.580524 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:59.580736 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:59.580748 24108 main.go:144] libmachine: About to run SSH command:
date +%s.%N
I1227 08:55:59.695810 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825759.656998713
I1227 08:55:59.695838 24108 fix.go:216] guest clock: 1766825759.656998713
I1227 08:55:59.695847 24108 fix.go:229] Guest: 2025-12-27 08:55:59.656998713 +0000 UTC Remote: 2025-12-27 08:55:59.577753428 +0000 UTC m=+82.275426938 (delta=79.245285ms)
I1227 08:55:59.695869 24108 fix.go:200] guest clock delta is within tolerance: 79.245285ms
I1227 08:55:59.695877 24108 start.go:83] releasing machines lock for "multinode-899276-m02", held for 17.741133225s
I1227 08:55:59.698823 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.699365 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.699403 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.701968 24108 out.go:179] * Found network options:
I1227 08:55:59.703396 24108 out.go:179] - NO_PROXY=192.168.39.24
W1227 08:55:59.704647 24108 proxy.go:120] fail to check proxy env: Error ip not in block
W1227 08:55:59.705042 24108 proxy.go:120] fail to check proxy env: Error ip not in block
I1227 08:55:59.705131 24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1227 08:55:59.705131 24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 08:55:59.708339 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.708387 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.708760 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.708817 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.708844 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.708889 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.709024 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
I1227 08:55:59.709228 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
W1227 08:55:59.793520 24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 08:55:59.793609 24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 08:55:59.816238 24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1227 08:55:59.816269 24108 start.go:496] detecting cgroup driver to use...
I1227 08:55:59.816301 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:55:59.816397 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:55:59.839936 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 08:55:59.852570 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 08:55:59.865005 24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 08:55:59.865103 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 08:55:59.877853 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:55:59.890799 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 08:55:59.903794 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:55:59.916281 24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 08:55:59.929816 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 08:55:59.942187 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 08:55:59.955245 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
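The run of sed commands above rewrites /etc/containerd/config.toml in place: it pins the pause/sandbox image, forces SystemdCgroup = true to match the detected "systemd" cgroup driver, migrates legacy runtime names to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-inserts enable_unprivileged_ports = true under the CRI plugin table. The capture-group style of those edits can be sketched in Go with regexp, shown here on an in-memory snippet for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	// The capture group keeps the original indentation intact.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = true"))
}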
I1227 08:55:59.968552 24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 08:55:59.979484 24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1227 08:55:59.979563 24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1227 08:55:59.993561 24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
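The three steps above are the usual kubeadm networking prerequisites: the sysctl probe fails because br_netfilter is not loaded yet (so /proc/sys/net/bridge/... does not exist), modprobe loads it so bridged pod traffic becomes visible to iptables, and IPv4 forwarding is switched on. Since /proc/sys entries are ordinary files, the last step reduces to a one-byte write, as in this sketch (needs root):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err) // typically a permission error when not root
		return
	}
	fmt.Println("IPv4 forwarding enabled")
}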
I1227 08:56:00.006240 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:00.152118 24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 08:56:00.190124 24108 start.go:496] detecting cgroup driver to use...
I1227 08:56:00.190172 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:56:00.190230 24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 08:56:00.211952 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:56:00.237208 24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 08:56:00.259010 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:56:00.275879 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:56:00.293605 24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 08:56:00.326414 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:56:00.342364 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:56:00.365931 24108 ssh_runner.go:195] Run: which cri-dockerd
I1227 08:56:00.370257 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 08:56:00.382716 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1227 08:56:00.404739 24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 08:56:00.548335 24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 08:56:00.689510 24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 08:56:00.689570 24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1227 08:56:00.729510 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 08:56:00.746884 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:00.890844 24108 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 08:56:01.355108 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 08:56:01.370599 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 08:56:01.386540 24108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I1227 08:56:01.404096 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:56:01.419794 24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 08:56:01.561520 24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 08:56:01.708164 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:01.863090 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 08:56:01.899043 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 08:56:01.915288 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:02.062800 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 08:56:02.174498 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:56:02.198066 24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 08:56:02.198172 24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 08:56:02.204239 24108 start.go:574] Will wait 60s for crictl version
I1227 08:56:02.204318 24108 ssh_runner.go:195] Run: which crictl
I1227 08:56:02.208415 24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1227 08:56:02.242462 24108 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.2
RuntimeApiVersion: v1
I1227 08:56:02.242547 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:56:02.272210 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:56:02.305864 24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
I1227 08:56:02.307155 24108 out.go:179] - env NO_PROXY=192.168.39.24
I1227 08:56:02.310958 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:56:02.311334 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:56:02.311356 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:56:02.311519 24108 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1227 08:56:02.316034 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
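The bash one-liner above keeps /etc/hosts convergent: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is staged in /tmp before being copied over /etc/hosts in one step. The filter-and-append part looks like this in Go (operating on a string for illustration):

package main

import (
	"fmt"
	"strings"
)

// setHostsEntry drops any line whose last field matches name and appends
// a fresh "ip<TAB>name" mapping, mirroring the grep -v / echo idiom above.
func setHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
			continue // stale entry: drop it
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(setHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}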
I1227 08:56:02.330706 24108 mustload.go:66] Loading cluster: multinode-899276
I1227 08:56:02.330927 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:56:02.332363 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:56:02.332574 24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.160
I1227 08:56:02.332593 24108 certs.go:195] generating shared ca certs ...
I1227 08:56:02.332615 24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:56:02.332749 24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
I1227 08:56:02.332808 24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
I1227 08:56:02.332826 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 08:56:02.332851 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 08:56:02.332871 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 08:56:02.332887 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
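The "skipping valid ... ca cert" lines mean the shared minikubeCA and proxyClientCA material already exists on the host and still checks out, so it is reused rather than regenerated, and only scheduled for copying into /var/lib/minikube/certs on the new node. A sketch of the kind of validity test involved, using crypto/x509 on a PEM file (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// certStillValid reports whether the first certificate in a PEM file
// parses and is inside its validity window, the sort of check behind
// minikube's "skipping valid ca cert" reuse decision.
func certStillValid(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	now := time.Now()
	return now.After(cert.NotBefore) && now.Before(cert.NotAfter), nil
}

func main() {
	ok, err := certStillValid("/tmp/ca.crt") // illustrative path
	fmt.Println(ok, err)
}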
I1227 08:56:02.332965 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
W1227 08:56:02.333010 24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
I1227 08:56:02.333027 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
I1227 08:56:02.333079 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
I1227 08:56:02.333119 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
I1227 08:56:02.333153 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
I1227 08:56:02.333216 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:56:02.333264 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.333285 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
I1227 08:56:02.333302 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
I1227 08:56:02.333328 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 08:56:02.365645 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1227 08:56:02.395629 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 08:56:02.425519 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1227 08:56:02.455554 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 08:56:02.486238 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
I1227 08:56:02.515842 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
I1227 08:56:02.545758 24108 ssh_runner.go:195] Run: openssl version
I1227 08:56:02.552395 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
I1227 08:56:02.564618 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
I1227 08:56:02.577235 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
I1227 08:56:02.582685 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
I1227 08:56:02.582759 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
I1227 08:56:02.590482 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 08:56:02.601896 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
I1227 08:56:02.613606 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.625518 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 08:56:02.637508 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.642823 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.642901 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.650764 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 08:56:02.663547 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 08:56:02.675853 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
I1227 08:56:02.688458 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
I1227 08:56:02.701658 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
I1227 08:56:02.706958 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
I1227 08:56:02.707033 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
I1227 08:56:02.714242 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 08:56:02.726789 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
I1227 08:56:02.740816 24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 08:56:02.745870 24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 08:56:02.745924 24108 kubeadm.go:935] updating node {m02 192.168.39.160 8443 v1.35.0 docker false true} ...
I1227 08:56:02.746010 24108 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 08:56:02.746115 24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 08:56:02.758129 24108 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
Initiating transfer...
I1227 08:56:02.758244 24108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
I1227 08:56:02.770426 24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
I1227 08:56:02.770451 24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
I1227 08:56:02.770474 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 08:56:02.770479 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm -> /var/lib/minikube/binaries/v1.35.0/kubeadm
I1227 08:56:02.770428 24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
I1227 08:56:02.770532 24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
I1227 08:56:02.770547 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl -> /var/lib/minikube/binaries/v1.35.0/kubectl
I1227 08:56:02.770638 24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
I1227 08:56:02.775599 24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
I1227 08:56:02.775636 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
I1227 08:56:02.800423 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet -> /var/lib/minikube/binaries/v1.35.0/kubelet
I1227 08:56:02.800448 24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
I1227 08:56:02.800474 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
I1227 08:56:02.800530 24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
I1227 08:56:02.847555 24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
I1227 08:56:02.847596 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
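The ?checksum=file:...sha256 suffix on the kubelet/kubeadm/kubectl URLs above is the go-getter-style convention minikube uses for downloads: fetch the published .sha256 sidecar and verify the binary against it before installing. The verification itself is plain streaming SHA-256, as in this sketch:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 streams a file through SHA-256 and compares the digest
// with the hex value published in the matching .sha256 file.
func verifySHA256(path, wantHex string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	got := hex.EncodeToString(h.Sum(nil))
	return got == strings.TrimSpace(strings.ToLower(wantHex)), nil
}

func main() {
	// Placeholder digest: a real caller reads it from kubeadm.sha256.
	ok, err := verifySHA256("/tmp/kubeadm", "0000000000000000000000000000000000000000000000000000000000000000")
	fmt.Println(ok, err)
}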
I1227 08:56:03.589571 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I1227 08:56:03.603768 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I1227 08:56:03.631212 24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 08:56:03.655890 24108 ssh_runner.go:195] Run: grep 192.168.39.24 control-plane.minikube.internal$ /etc/hosts
I1227 08:56:03.660915 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 08:56:03.680065 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:03.823402 24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 08:56:03.862307 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:56:03.862561 24108 start.go:318] joinCluster: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 08:56:03.862676 24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm token create --print-join-command --ttl=0"
I1227 08:56:03.865388 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:56:03.865858 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:56:03.865900 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:56:03.866073 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:56:04.026904 24108 start.go:344] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
I1227 08:56:04.027011 24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9k0kod.6geqtmlyqvlg3686 --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-899276-m02"
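The join command above authenticates in both directions: the bootstrap token proves the node to the cluster, while --discovery-token-ca-cert-hash pins the cluster to the node. That hash is SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate, so kubeadm refuses to join an API server whose CA does not match. Recomputing it from a CA PEM in Go:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery hash: sha256 over the
// DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
func caCertHash(caPEMPath string) (string, error) {
	data, err := os.ReadFile(caPEMPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", errors.New("no PEM block in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt") // path from the log
	fmt.Println(hash, err)
}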
I1227 08:56:04.959833 24108 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
I1227 08:56:05.276831 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false
I1227 08:56:05.365119 24108 start.go:320] duration metric: took 1.502556165s to joinCluster
I1227 08:56:05.367341 24108 out.go:203]
W1227 08:56:05.368707 24108 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false: Process exited with status 1
stdout:
stderr:
Error from server (NotFound): nodes "multinode-899276-m02" not found
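This is the root cause of the exit status 80: kubeadm join returned at 08:56:04 and kubelet was enabled, but the kubectl label call one second later got NotFound, which suggests the m02 node object had not yet been registered with the API server when the label was applied. A hedged sketch of a poll-until-registered guard that would close such a race (names and intervals are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForNode polls `kubectl get node <name>` until the node object exists
// or the deadline passes, so a follow-up label cannot race registration.
func waitForNode(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "node", name).Run(); err == nil {
			return nil // node is registered
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not registered within %v", name, timeout)
}

func main() {
	if err := waitForNode("multinode-899276-m02", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("node registered; safe to apply labels")
}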
W1227 08:56:05.368724 24108 out.go:285] *
W1227 08:56:05.369029 24108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 08:56:05.370349 24108 out.go:203]
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestMultiNode/serial/FreshStart2Nodes]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-899276 -n multinode-899276
helpers_test.go:253: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p multinode-899276 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p multinode-899276 logs -n 25: (1.029886594s)
helpers_test.go:261: TestMultiNode/serial/FreshStart2Nodes logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ start │ -p json-output-error-635110 --memory=3072 --output=json --wait=true --driver=fail │ json-output-error-635110 │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │ │
│ delete │ -p json-output-error-635110 │ json-output-error-635110 │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │ 27 Dec 25 08:52 UTC │
│ start │ -p first-739389 --driver=kvm2 │ first-739389 │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │ 27 Dec 25 08:52 UTC │
│ start │ -p second-741777 --driver=kvm2 │ second-741777 │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │ 27 Dec 25 08:53 UTC │
│ delete │ -p second-741777 │ second-741777 │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
│ delete │ -p first-739389 │ first-739389 │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
│ start │ -p mount-start-1-817954 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 │ mount-start-1-817954 │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
│ mount │ /tmp/TestMountStartserial2539336940/001:/minikube-host --profile mount-start-1-817954 --v 0 --9p-version 9p2000.L --gid 0 --ip --msize 6543 --port 46464 --type 9p --uid 0 │ mount-start-1-817954 │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ │
│ ssh │ mount-start-1-817954 ssh -- ls /minikube-host │ mount-start-1-817954 │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
│ ssh │ mount-start-1-817954 ssh -- findmnt --json /minikube-host │ mount-start-1-817954 │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
│ start │ -p mount-start-2-834751 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:54 UTC │
│ mount │ /tmp/TestMountStartserial2539336940/001:/minikube-host --profile mount-start-2-834751 --v 0 --9p-version 9p2000.L --gid 0 --ip --msize 6543 --port 46465 --type 9p --uid 0 │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ │
│ ssh │ mount-start-2-834751 ssh -- ls /minikube-host │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ ssh │ mount-start-2-834751 ssh -- findmnt --json /minikube-host │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ delete │ -p mount-start-1-817954 --alsologtostderr -v=5 │ mount-start-1-817954 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ ssh │ mount-start-2-834751 ssh -- ls /minikube-host │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ ssh │ mount-start-2-834751 ssh -- findmnt --json /minikube-host │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ stop │ -p mount-start-2-834751 │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ start │ -p mount-start-2-834751 │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ mount │ /tmp/TestMountStartserial2539336940/001:/minikube-host --profile mount-start-2-834751 --v 0 --9p-version 9p2000.L --gid 0 --ip --msize 6543 --port 46465 --type 9p --uid 0 │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ │
│ ssh │ mount-start-2-834751 ssh -- ls /minikube-host │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ ssh │ mount-start-2-834751 ssh -- findmnt --json /minikube-host │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ delete │ -p mount-start-2-834751 │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ delete │ -p mount-start-1-817954 │ mount-start-1-817954 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
│ start │ -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 │ multinode-899276 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/27 08:54:37
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1227 08:54:37.348894 24108 out.go:360] Setting OutFile to fd 1 ...
I1227 08:54:37.349196 24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:54:37.349207 24108 out.go:374] Setting ErrFile to fd 2...
I1227 08:54:37.349214 24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:54:37.349401 24108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
I1227 08:54:37.349901 24108 out.go:368] Setting JSON to false
I1227 08:54:37.350702 24108 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2227,"bootTime":1766823450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1227 08:54:37.350761 24108 start.go:143] virtualization: kvm guest
I1227 08:54:37.352914 24108 out.go:179] * [multinode-899276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1227 08:54:37.354122 24108 notify.go:221] Checking for updates...
I1227 08:54:37.354140 24108 out.go:179] - MINIKUBE_LOCATION=22344
I1227 08:54:37.355599 24108 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 08:54:37.356985 24108 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
I1227 08:54:37.358228 24108 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:54:37.359373 24108 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1227 08:54:37.360648 24108 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 08:54:37.362069 24108 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 08:54:37.398292 24108 out.go:179] * Using the kvm2 driver based on user configuration
I1227 08:54:37.399595 24108 start.go:309] selected driver: kvm2
I1227 08:54:37.399614 24108 start.go:928] validating driver "kvm2" against <nil>
I1227 08:54:37.399634 24108 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 08:54:37.400332 24108 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 08:54:37.400590 24108 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 08:54:37.400626 24108 cni.go:84] Creating CNI manager for ""
I1227 08:54:37.400682 24108 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I1227 08:54:37.400692 24108 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1227 08:54:37.400744 24108 start.go:353] cluster config:
{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 08:54:37.400897 24108 iso.go:125] acquiring lock: {Name:mkf3af0a60e6ccee2eeb813de50903ed5d7e8922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 08:54:37.402631 24108 out.go:179] * Starting "multinode-899276" primary control-plane node in "multinode-899276" cluster
I1227 08:54:37.403816 24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 08:54:37.403844 24108 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
I1227 08:54:37.403854 24108 cache.go:65] Caching tarball of preloaded images
I1227 08:54:37.403951 24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1227 08:54:37.403967 24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1227 08:54:37.404346 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:54:37.404374 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json: {Name:mk5e07ed738ae868a23976588c175a8cb2b30a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:54:37.404563 24108 start.go:360] acquireMachinesLock for multinode-899276: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1227 08:54:37.404598 24108 start.go:364] duration metric: took 20.431µs to acquireMachinesLock for "multinode-899276"
I1227 08:54:37.404622 24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 08:54:37.404675 24108 start.go:125] createHost starting for "" (driver="kvm2")
I1227 08:54:37.407102 24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I1227 08:54:37.407274 24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
I1227 08:54:37.407306 24108 client.go:173] LocalClient.Create starting
I1227 08:54:37.407365 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
I1227 08:54:37.407409 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:54:37.407425 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:54:37.407478 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
I1227 08:54:37.407496 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:54:37.407507 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:54:37.407806 24108 main.go:144] libmachine: creating domain...
I1227 08:54:37.407817 24108 main.go:144] libmachine: creating network...
I1227 08:54:37.409512 24108 main.go:144] libmachine: found existing default network
I1227 08:54:37.409702 24108 main.go:144] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1227 08:54:37.410292 24108 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001caea70}
I1227 08:54:37.410380 24108 main.go:144] libmachine: defining private network:
<network>
<name>mk-multinode-899276</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
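Before this network is created, network.go has already scanned for a free private /24 (settling on 192.168.39.0/24 above) so the new libvirt network cannot collide with subnets already configured on the host. A simplified sketch of such a scan, checking candidates against the host's interface addresses (minikube's real version also honors reservations and more ranges):

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first 192.168.x.0/24 candidate that does
// not contain any address already configured on a host interface.
func freePrivateSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for x := 39; x < 255; x++ {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", x))
		inUse := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				inUse = true
				break
			}
		}
		if !inUse {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := freePrivateSubnet()
	fmt.Println(subnet, err)
}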
I1227 08:54:37.416200 24108 main.go:144] libmachine: creating private network mk-multinode-899276 192.168.39.0/24...
I1227 08:54:37.484690 24108 main.go:144] libmachine: private network mk-multinode-899276 192.168.39.0/24 created
I1227 08:54:37.484994 24108 main.go:144] libmachine: <network>
<name>mk-multinode-899276</name>
<uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:7e:96:0f'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1227 08:54:37.485088 24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
I1227 08:54:37.485112 24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
I1227 08:54:37.485123 24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:54:37.485174 24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
I1227 08:54:37.708878 24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa...
I1227 08:54:37.789981 24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk...
I1227 08:54:37.790024 24108 main.go:144] libmachine: Writing magic tar header
I1227 08:54:37.790040 24108 main.go:144] libmachine: Writing SSH key tar header
I1227 08:54:37.790127 24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
I1227 08:54:37.790183 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276
I1227 08:54:37.790204 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 (perms=drwx------)
I1227 08:54:37.790215 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
I1227 08:54:37.790225 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
I1227 08:54:37.790238 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:54:37.790249 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
I1227 08:54:37.790257 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
I1227 08:54:37.790265 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
I1227 08:54:37.790275 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1227 08:54:37.790287 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1227 08:54:37.790303 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
I1227 08:54:37.790313 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1227 08:54:37.790321 24108 main.go:144] libmachine: checking permissions on dir: /home
I1227 08:54:37.790330 24108 main.go:144] libmachine: skipping /home - not owner
I1227 08:54:37.790334 24108 main.go:144] libmachine: defining domain...
I1227 08:54:37.792061 24108 main.go:144] libmachine: defining domain using XML:
<domain type='kvm'>
  <name>multinode-899276</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-multinode-899276'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
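Defining a domain from XML like the above maps to a single bindings call; a sketch under the same libvirt.org/go/libvirt assumption ("domain.xml" is a hypothetical path):

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domain.xml would hold a <domain> definition like the one logged above.
	xmlConfig, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}

	// DefineXML registers the domain persistently without starting it;
	// libvirt fills in defaults (UUID, MAC and PCI addresses), which is why
	// the "starting domain XML" dump further below is much larger.
	dom, err := conn.DomainDefineXML(string(xmlConfig))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
}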
I1227 08:54:37.797217 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:e2:49:84 in network default
I1227 08:54:37.797913 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:37.797931 24108 main.go:144] libmachine: starting domain...
I1227 08:54:37.797936 24108 main.go:144] libmachine: ensuring networks are active...
I1227 08:54:37.798746 24108 main.go:144] libmachine: Ensuring network default is active
I1227 08:54:37.799132 24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
I1227 08:54:37.799776 24108 main.go:144] libmachine: getting domain XML...
I1227 08:54:37.800794 24108 main.go:144] libmachine: starting domain XML:
<domain type='kvm'>
  <name>multinode-899276</name>
  <uuid>6d370929-9382-4953-8ba6-4fb6eca3e648</uuid>
  <memory unit='KiB'>3145728</memory>
  <currentMemory unit='KiB'>3145728</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' io='threads'/>
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
      <target dev='hda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='scsi' index='0' model='lsilogic'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:4c:5c:b4'/>
      <source network='mk-multinode-899276'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:e2:49:84'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <audio id='1' type='none'/>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </rng>
  </devices>
</domain>
I1227 08:54:39.079279 24108 main.go:144] libmachine: waiting for domain to start...
I1227 08:54:39.080610 24108 main.go:144] libmachine: domain is now running
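The "ensuring networks are active" and "starting domain" steps above reduce to a few bindings calls; a sketch, assuming libvirt.org/go/libvirt, of checking each network and then booting the defined-but-stopped domain:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// ensureActive starts a defined network if it is not already running,
// mirroring the "Ensuring network ... is active" lines above.
func ensureActive(conn *libvirt.Connect, name string) error {
	net, err := conn.LookupNetworkByName(name)
	if err != nil {
		return err
	}
	defer net.Free()
	active, err := net.IsActive()
	if err != nil {
		return err
	}
	if !active {
		return net.Create()
	}
	return nil
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for _, n := range []string{"default", "mk-multinode-899276"} {
		if err := ensureActive(conn, n); err != nil {
			log.Fatal(err)
		}
	}

	dom, err := conn.LookupDomainByName("multinode-899276")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Create() boots the defined domain, after which the driver starts
	// polling for an IP as logged below.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}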
I1227 08:54:39.080624 24108 main.go:144] libmachine: waiting for IP...
I1227 08:54:39.081451 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.082023 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.082037 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.082336 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:39.082377 24108 retry.go:84] will retry after 200ms: waiting for domain to come up
I1227 08:54:39.326020 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.326723 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.326741 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.327098 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:39.575768 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.576511 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.576534 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.576883 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:39.876331 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:39.877091 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:39.877107 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:39.877413 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:40.370368 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:40.371069 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:40.371086 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:40.371431 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:40.865483 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:40.866211 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:40.866236 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:40.866603 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:41.484623 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:41.485260 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:41.485279 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:41.485638 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:42.393849 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:42.394445 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:42.394463 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:42.394914 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:43.319225 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:43.320003 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:43.320020 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:43.320334 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:44.724122 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:44.724874 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:44.724891 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:44.725237 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:46.322345 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:46.323107 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:46.323130 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:46.323457 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:48.157422 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:48.158091 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:48.158110 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:48.158455 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:51.501875 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:51.502515 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
I1227 08:54:51.502530 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:54:51.502791 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:54:51.502830 24108 retry.go:84] will retry after 4.3s: waiting for domain to come up
I1227 08:54:55.837835 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:55.838577 24108 main.go:144] libmachine: domain multinode-899276 has current primary IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:55.838596 24108 main.go:144] libmachine: found domain IP: 192.168.39.24
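The lease-then-arp polling above (about 17 seconds of retries with growing waits before the DHCP lease appears) amounts to querying the domain's interface addresses from two sources until the NIC with the expected MAC reports one. A sketch of the loop's shape, assuming libvirt.org/go/libvirt; timeout and backoff values are illustrative:

package kvm

import (
	"fmt"
	"strings"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the guest's interface addresses, preferring the DHCP
// lease table and falling back to the ARP table, backing off between
// attempts until the interface with the given MAC reports an address.
func waitForIP(dom *libvirt.Domain, mac string, timeout time.Duration) (string, error) {
	sources := []libvirt.DomainInterfaceAddressesSource{
		libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE,
		libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,
	}
	deadline := time.Now().Add(timeout)
	for wait := 200 * time.Millisecond; time.Now().Before(deadline); wait *= 2 {
		for _, src := range sources {
			ifaces, err := dom.ListAllInterfaceAddresses(src)
			if err != nil {
				continue // this source may have no data yet; try the next
			}
			for _, iface := range ifaces {
				if strings.EqualFold(iface.Hwaddr, mac) && len(iface.Addrs) > 0 {
					return iface.Addrs[0].Addr, nil
				}
			}
		}
		time.Sleep(wait)
	}
	return "", fmt.Errorf("timed out waiting for IP of MAC %s", mac)
}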
I1227 08:54:55.838605 24108 main.go:144] libmachine: reserving static IP address...
I1227 08:54:55.839242 24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276", mac: "52:54:00:4c:5c:b4", ip: "192.168.39.24"} in network mk-multinode-899276
I1227 08:54:56.025597 24108 main.go:144] libmachine: reserved static IP address 192.168.39.24 for domain multinode-899276
I1227 08:54:56.025623 24108 main.go:144] libmachine: waiting for SSH...
I1227 08:54:56.025631 24108 main.go:144] libmachine: Getting to WaitForSSH function...
I1227 08:54:56.028518 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.029028 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.029077 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.029273 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.029482 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.029494 24108 main.go:144] libmachine: About to run SSH command:
exit 0
I1227 08:54:56.143804 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
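The SSH wait above boils down to dialing the guest with the machine's key and running `exit 0` until it succeeds. A sketch using golang.org/x/crypto/ssh, with host, user and key path taken from the log; the two-minute deadline is an assumption for illustration:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; host key not pinned
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", "192.168.39.24:22", cfg)
		if err != nil {
			time.Sleep(time.Second) // sshd not accepting connections yet
			continue
		}
		sess, err := client.NewSession()
		if err == nil {
			err = sess.Run("exit 0") // the exact probe command from the log
			sess.Close()
		}
		client.Close()
		if err == nil {
			log.Println("SSH is ready")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("timed out waiting for SSH")
}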
I1227 08:54:56.144248 24108 main.go:144] libmachine: domain creation complete
I1227 08:54:56.146013 24108 machine.go:94] provisionDockerMachine start ...
I1227 08:54:56.148712 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.149157 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.149206 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.149383 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.149565 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.149574 24108 main.go:144] libmachine: About to run SSH command:
hostname
I1227 08:54:56.263810 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
I1227 08:54:56.263841 24108 buildroot.go:166] provisioning hostname "multinode-899276"
I1227 08:54:56.266910 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.267410 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.267435 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.267640 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.267847 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.267858 24108 main.go:144] libmachine: About to run SSH command:
sudo hostname multinode-899276 && echo "multinode-899276" | sudo tee /etc/hostname
I1227 08:54:56.401325 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276
I1227 08:54:56.404664 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.405235 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.405263 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.405433 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.405644 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.405659 24108 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-899276' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276/g' /etc/hosts;
  else
    echo '127.0.1.1 multinode-899276' | sudo tee -a /etc/hosts;
  fi
fi
I1227 08:54:56.543193 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 08:54:56.543230 24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
I1227 08:54:56.543264 24108 buildroot.go:174] setting up certificates
I1227 08:54:56.543282 24108 provision.go:84] configureAuth start
I1227 08:54:56.546171 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.546588 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.546612 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.548760 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.549114 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.549136 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.549243 24108 provision.go:143] copyHostCerts
I1227 08:54:56.549266 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:54:56.549290 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
I1227 08:54:56.549298 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:54:56.549370 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
I1227 08:54:56.549490 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:54:56.549516 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
I1227 08:54:56.549522 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:54:56.549548 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
I1227 08:54:56.549593 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:54:56.549609 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
I1227 08:54:56.549615 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:54:56.549634 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
I1227 08:54:56.549680 24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276 san=[127.0.0.1 192.168.39.24 localhost minikube multinode-899276]
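The server-cert generation above is standard CA-signed issuance with the SANs listed in the log. A sketch with crypto/x509 from the standard library, assuming the CA cert and key are already parsed (not minikube's actual code; the backdated NotBefore is an assumption to tolerate clock skew):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert issues a server certificate signed by the given CA,
// carrying the SANs from the log above: 127.0.0.1, 192.168.39.24,
// localhost, minikube, multinode-899276.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-899276"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-899276"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.24")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, priv, nil
}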
I1227 08:54:56.564952 24108 provision.go:177] copyRemoteCerts
I1227 08:54:56.565003 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 08:54:56.567240 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.567643 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.567677 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.567850 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:56.656198 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 08:54:56.656292 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 08:54:56.685216 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 08:54:56.685304 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1227 08:54:56.714733 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 08:54:56.714819 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 08:54:56.743305 24108 provision.go:87] duration metric: took 199.989326ms to configureAuth
I1227 08:54:56.743338 24108 buildroot.go:189] setting minikube options for container-runtime
I1227 08:54:56.743528 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:54:56.746235 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.746587 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.746606 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.746782 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.747027 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.747039 24108 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 08:54:56.861225 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
I1227 08:54:56.861255 24108 buildroot.go:70] root file system type: tmpfs
I1227 08:54:56.861417 24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 08:54:56.864305 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.864731 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.864767 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.864925 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:56.865130 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:56.865170 24108 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 08:54:56.996399 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1227 08:54:56.999444 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:56.999882 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:56.999912 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:57.000156 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:57.000379 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:57.000396 24108 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 08:54:57.924795 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
I1227 08:54:57.924823 24108 machine.go:97] duration metric: took 1.778786884s to provisionDockerMachine
I1227 08:54:57.924839 24108 client.go:176] duration metric: took 20.517522558s to LocalClient.Create
I1227 08:54:57.924853 24108 start.go:167] duration metric: took 20.517578026s to libmachine.API.Create "multinode-899276"
I1227 08:54:57.924862 24108 start.go:293] postStartSetup for "multinode-899276" (driver="kvm2")
I1227 08:54:57.924874 24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 08:54:57.924962 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 08:54:57.927733 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:57.928188 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:57.928219 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:57.928364 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:58.017094 24108 ssh_runner.go:195] Run: cat /etc/os-release
I1227 08:54:58.021892 24108 info.go:137] Remote host: Buildroot 2025.02
I1227 08:54:58.021927 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
I1227 08:54:58.022001 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
I1227 08:54:58.022108 24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
I1227 08:54:58.022115 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
I1227 08:54:58.022194 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 08:54:58.035018 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:54:58.064746 24108 start.go:296] duration metric: took 139.872084ms for postStartSetup
I1227 08:54:58.067860 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.068279 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.068306 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.068579 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:54:58.068756 24108 start.go:128] duration metric: took 20.664071028s to createHost
I1227 08:54:58.071566 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.072015 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.072040 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.072244 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:54:58.072473 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1227 08:54:58.072488 24108 main.go:144] libmachine: About to run SSH command:
date +%s.%N
I1227 08:54:58.187322 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825698.156416973
I1227 08:54:58.187344 24108 fix.go:216] guest clock: 1766825698.156416973
I1227 08:54:58.187351 24108 fix.go:229] Guest: 2025-12-27 08:54:58.156416973 +0000 UTC Remote: 2025-12-27 08:54:58.068766977 +0000 UTC m=+20.766440443 (delta=87.649996ms)
I1227 08:54:58.187367 24108 fix.go:200] guest clock delta is within tolerance: 87.649996ms
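The guest-clock check above runs `date +%s.%N` in the VM, parses the seconds.nanoseconds output, and compares it with the host clock; only a delta outside the tolerance would trigger a resync. A parsing sketch in standard-library Go:

package clock

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses `date +%s.%N` output (e.g. "1766825698.156416973")
// and returns the host-guest clock skew at the moment of the call.
func guestDelta(out string) (time.Duration, error) {
	secs, nsecs, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secs, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing seconds: %w", err)
	}
	var nsec int64
	if nsecs != "" {
		if nsec, err = strconv.ParseInt(nsecs, 10, 64); err != nil {
			return 0, fmt.Errorf("parsing nanoseconds: %w", err)
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}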
I1227 08:54:58.187371 24108 start.go:83] releasing machines lock for "multinode-899276", held for 20.782762567s
I1227 08:54:58.189878 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.190311 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.190336 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.190848 24108 ssh_runner.go:195] Run: cat /version.json
I1227 08:54:58.190934 24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 08:54:58.193909 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.193920 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.194367 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.194393 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.194412 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:54:58.194445 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:54:58.194571 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:58.194749 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:54:58.303202 24108 ssh_runner.go:195] Run: systemctl --version
I1227 08:54:58.309380 24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 08:54:58.315530 24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 08:54:58.315591 24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 08:54:58.335551 24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1227 08:54:58.335587 24108 start.go:496] detecting cgroup driver to use...
I1227 08:54:58.335615 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:54:58.335736 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:54:58.357443 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 08:54:58.369407 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 08:54:58.384702 24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 08:54:58.384807 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 08:54:58.399640 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:54:58.412464 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 08:54:58.424691 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:54:58.437707 24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 08:54:58.450402 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 08:54:58.462916 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 08:54:58.475650 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 08:54:58.493530 24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 08:54:58.504139 24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1227 08:54:58.504192 24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1227 08:54:58.516423 24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 08:54:58.528272 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:54:58.673716 24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 08:54:58.720867 24108 start.go:496] detecting cgroup driver to use...
I1227 08:54:58.720909 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:54:58.720972 24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 08:54:58.744526 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:54:58.764985 24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 08:54:58.785879 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:54:58.803205 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:54:58.821885 24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 08:54:58.856773 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:54:58.873676 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:54:58.896773 24108 ssh_runner.go:195] Run: which cri-dockerd
I1227 08:54:58.901095 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 08:54:58.912977 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1227 08:54:58.935679 24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 08:54:59.087073 24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 08:54:59.235233 24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 08:54:59.235368 24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1227 08:54:59.257291 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 08:54:59.273342 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:54:59.413736 24108 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 08:54:59.868087 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 08:54:59.883321 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 08:54:59.898581 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:54:59.913286 24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 08:55:00.062974 24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 08:55:00.214186 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:00.363957 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 08:55:00.400471 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 08:55:00.416741 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:00.560590 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 08:55:00.668182 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:55:00.687244 24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 08:55:00.687326 24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 08:55:00.693883 24108 start.go:574] Will wait 60s for crictl version
I1227 08:55:00.693968 24108 ssh_runner.go:195] Run: which crictl
I1227 08:55:00.698083 24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1227 08:55:00.732884 24108 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.2
RuntimeApiVersion: v1
I1227 08:55:00.732961 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:55:00.764467 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:55:00.793639 24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
I1227 08:55:00.796490 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:00.796890 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:55:00.796916 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:00.797129 24108 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1227 08:55:00.801979 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 08:55:00.819694 24108 kubeadm.go:884] updating cluster {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 08:55:00.819800 24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 08:55:00.819853 24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 08:55:00.841928 24108 docker.go:694] Got preloaded images:
I1227 08:55:00.841951 24108 docker.go:700] registry.k8s.io/kube-apiserver:v1.35.0 wasn't preloaded
I1227 08:55:00.841997 24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1227 08:55:00.855548 24108 ssh_runner.go:195] Run: which lz4
I1227 08:55:00.860486 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I1227 08:55:00.860594 24108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1227 08:55:00.865387 24108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1227 08:55:00.865417 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284632523 bytes)
I1227 08:55:01.961740 24108 docker.go:658] duration metric: took 1.101175277s to copy over tarball
I1227 08:55:01.961831 24108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1227 08:55:03.184079 24108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.222186343s)
I1227 08:55:03.184117 24108 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1227 08:55:03.216811 24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1227 08:55:03.229331 24108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
I1227 08:55:03.250420 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 08:55:03.266159 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:03.414345 24108 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 08:55:05.441484 24108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.027089175s)
I1227 08:55:05.441602 24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 08:55:05.460483 24108 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 08:55:05.460508 24108 cache_images.go:86] Images are preloaded, skipping loading
I1227 08:55:05.460517 24108 kubeadm.go:935] updating node { 192.168.39.24 8443 v1.35.0 docker true true} ...
I1227 08:55:05.460610 24108 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 08:55:05.460667 24108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1227 08:55:05.512991 24108 cni.go:84] Creating CNI manager for ""
I1227 08:55:05.513022 24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I1227 08:55:05.513043 24108 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 08:55:05.513080 24108 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899276 NodeName:multinode-899276 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 08:55:05.513228 24108 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.24
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "multinode-899276"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.24"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
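
These four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) travel as one file joined by --- separators; the 2219-byte scp to /var/tmp/minikube/kubeadm.yaml.new a few lines below is that combined file. A minimal sketch of the assembly step (document bodies abbreviated to placeholders):

package main

import (
	"os"
	"strings"
)

func main() {
	// Placeholders for the four documents printed above; each ends with a newline.
	docs := []string{
		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n",
		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n",
		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n",
		"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n",
	}
	combined := strings.Join(docs, "---\n")
	// Staged as kubeadm.yaml.new first; the log later promotes it to kubeadm.yaml with cp.
	if err := os.WriteFile("kubeadm.yaml.new", []byte(combined), 0o644); err != nil {
		panic(err)
	}
}
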
I1227 08:55:05.513292 24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 08:55:05.525546 24108 binaries.go:51] Found k8s binaries, skipping transfer
I1227 08:55:05.525616 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 08:55:05.537237 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
I1227 08:55:05.557993 24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 08:55:05.579343 24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
I1227 08:55:05.600550 24108 ssh_runner.go:195] Run: grep 192.168.39.24 control-plane.minikube.internal$ /etc/hosts
I1227 08:55:05.605151 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
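
The bash one-liner above rewrites /etc/hosts idempotently: drop any line already ending in a tab plus control-plane.minikube.internal, then append the current record, staging through /tmp/h.$$ before the sudo cp. The same filter-and-append logic in Go (ensureHostRecord is our name for it, not a minikube function):

package main

import (
	"fmt"
	"strings"
)

// ensureHostRecord drops lines ending in "\t<host>" and appends "<ip>\t<host>",
// mirroring the grep -v / echo pipeline in the log above.
func ensureHostRecord(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// grep -v $'\t<host>$': remove any stale record for this hostname.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.39.9\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostRecord(in, "192.168.39.24", "control-plane.minikube.internal"))
}
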
I1227 08:55:05.620984 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:05.769960 24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 08:55:05.800659 24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.24
I1227 08:55:05.800681 24108 certs.go:195] generating shared ca certs ...
I1227 08:55:05.800706 24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.800879 24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
I1227 08:55:05.800934 24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
I1227 08:55:05.800949 24108 certs.go:257] generating profile certs ...
I1227 08:55:05.801012 24108 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key
I1227 08:55:05.801071 24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt with IP's: []
I1227 08:55:05.940834 24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt ...
I1227 08:55:05.940874 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt: {Name:mk02178aca7f56d432d5f5e37ab494f5434cad17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.941124 24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key ...
I1227 08:55:05.941147 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key: {Name:mk6471e99270ac274eb8d161834a8e74a99ce837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.941271 24108 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d
I1227 08:55:05.941294 24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
I1227 08:55:05.986153 24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d ...
I1227 08:55:05.986188 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d: {Name:mk802401bb34f0577b94f18188268edd10cab228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.986405 24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d ...
I1227 08:55:05.986426 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d: {Name:mk499be31979f3e860f435493b7a49f6c8a77f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:05.986541 24108 certs.go:382] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt
I1227 08:55:05.986669 24108 certs.go:386] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key
I1227 08:55:05.986770 24108 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key
I1227 08:55:05.986801 24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt with IP's: []
I1227 08:55:06.117402 24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt ...
I1227 08:55:06.117436 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt: {Name:mkff498d36179d0686c029b1a0d2c2aa28970730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:06.117638 24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key ...
I1227 08:55:06.117659 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key: {Name:mkae01040e0a5553a361620eb1dc3658cbd20bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:06.117774 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 08:55:06.117805 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 08:55:06.117825 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 08:55:06.117845 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 08:55:06.117861 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 08:55:06.117875 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 08:55:06.117888 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 08:55:06.117906 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 08:55:06.117969 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
W1227 08:55:06.118021 24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
I1227 08:55:06.118034 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
I1227 08:55:06.118087 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
I1227 08:55:06.118141 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
I1227 08:55:06.118179 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
I1227 08:55:06.118236 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:55:06.118294 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.118318 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
I1227 08:55:06.118337 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
I1227 08:55:06.118857 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 08:55:06.150178 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1227 08:55:06.179223 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 08:55:06.208476 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1227 08:55:06.239094 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1227 08:55:06.268368 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 08:55:06.297730 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 08:55:06.326802 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 08:55:06.357205 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 08:55:06.387582 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
I1227 08:55:06.417521 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
I1227 08:55:06.449486 24108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 08:55:06.473842 24108 ssh_runner.go:195] Run: openssl version
I1227 08:55:06.481673 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
I1227 08:55:06.494727 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
I1227 08:55:06.506605 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
I1227 08:55:06.511904 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
I1227 08:55:06.511979 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
I1227 08:55:06.522748 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 08:55:06.535114 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
I1227 08:55:06.546799 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.558007 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 08:55:06.569782 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.575189 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.575271 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 08:55:06.582359 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 08:55:06.594977 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 08:55:06.606187 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
I1227 08:55:06.617464 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
I1227 08:55:06.628478 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
I1227 08:55:06.633627 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
I1227 08:55:06.633684 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
I1227 08:55:06.640779 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 08:55:06.652579 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
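
Each `openssl x509 -hash -noout` call above prints a certificate's subject hash, and OpenSSL resolves trust lookups through symlinks named <hash>.0 in the certs directory, which is why 94612.pem ends up linked as /etc/ssl/certs/3ec20f2e.0. A small sketch of the linking step once the hash is known (paths and hash copied from the log; creating links under /etc/ssl/certs requires root):

package main

import (
	"os"
	"path/filepath"
)

// linkCert reproduces the `ln -fs <pem> /etc/ssl/certs/<hash>.0` step:
// OpenSSL finds CA certs by subject-hash-named symlinks.
func linkCert(certPath, certsDir, hash string) error {
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Values from the log; adjust paths when trying this outside the VM.
	if err := linkCert("/etc/ssl/certs/94612.pem", "/etc/ssl/certs", "3ec20f2e"); err != nil {
		panic(err)
	}
}
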
I1227 08:55:06.663960 24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 08:55:06.668886 24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 08:55:06.668953 24108 kubeadm.go:401] StartCluster: {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 08:55:06.669105 24108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1227 08:55:06.684838 24108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 08:55:06.696256 24108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 08:55:06.708324 24108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 08:55:06.720681 24108 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 08:55:06.720728 24108 kubeadm.go:158] found existing configuration files:
I1227 08:55:06.720787 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 08:55:06.731330 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 08:55:06.731392 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 08:55:06.744324 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 08:55:06.754995 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 08:55:06.755091 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 08:55:06.767513 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 08:55:06.778490 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 08:55:06.778576 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 08:55:06.789929 24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 08:55:06.800709 24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 08:55:06.800794 24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 08:55:06.812666 24108 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1227 08:55:07.024456 24108 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 08:55:15.975818 24108 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 08:55:15.975905 24108 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 08:55:15.976023 24108 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 08:55:15.976153 24108 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 08:55:15.976280 24108 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 08:55:15.976375 24108 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 08:55:15.977966 24108 out.go:252] - Generating certificates and keys ...
I1227 08:55:15.978092 24108 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 08:55:15.978154 24108 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 08:55:15.978227 24108 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 08:55:15.978279 24108 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 08:55:15.978354 24108 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 08:55:15.978437 24108 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 08:55:15.978507 24108 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 08:55:15.978652 24108 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
I1227 08:55:15.978708 24108 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 08:55:15.978817 24108 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
I1227 08:55:15.978879 24108 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 08:55:15.978934 24108 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 08:55:15.979025 24108 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 08:55:15.979124 24108 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 08:55:15.979189 24108 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 08:55:15.979284 24108 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 08:55:15.979376 24108 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 08:55:15.979463 24108 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 08:55:15.979528 24108 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 08:55:15.979667 24108 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 08:55:15.979731 24108 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 08:55:15.981818 24108 out.go:252] - Booting up control plane ...
I1227 08:55:15.981903 24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 08:55:15.981981 24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 08:55:15.982067 24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 08:55:15.982163 24108 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 08:55:15.982243 24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 08:55:15.982343 24108 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 08:55:15.982416 24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 08:55:15.982468 24108 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 08:55:15.982635 24108 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 08:55:15.982810 24108 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 08:55:15.982906 24108 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001479517s
I1227 08:55:15.983060 24108 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1227 08:55:15.983187 24108 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
I1227 08:55:15.983294 24108 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1227 08:55:15.983366 24108 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1227 08:55:15.983434 24108 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508222077s
I1227 08:55:15.983490 24108 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.795811505s
I1227 08:55:15.983547 24108 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00280761s
I1227 08:55:15.983634 24108 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1227 08:55:15.983743 24108 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1227 08:55:15.983806 24108 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1227 08:55:15.983962 24108 kubeadm.go:319] [mark-control-plane] Marking the node multinode-899276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1227 08:55:15.984029 24108 kubeadm.go:319] [bootstrap-token] Using token: 8gubmu.jzeht1x7riked3vp
I1227 08:55:15.985339 24108 out.go:252] - Configuring RBAC rules ...
I1227 08:55:15.985468 24108 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1227 08:55:15.985590 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1227 08:55:15.985836 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1227 08:55:15.985963 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1227 08:55:15.986071 24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1227 08:55:15.986140 24108 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1227 08:55:15.986233 24108 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1227 08:55:15.986269 24108 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1227 08:55:15.986315 24108 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1227 08:55:15.986323 24108 kubeadm.go:319]
I1227 08:55:15.986381 24108 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1227 08:55:15.986390 24108 kubeadm.go:319]
I1227 08:55:15.986465 24108 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1227 08:55:15.986474 24108 kubeadm.go:319]
I1227 08:55:15.986507 24108 kubeadm.go:319] mkdir -p $HOME/.kube
I1227 08:55:15.986576 24108 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1227 08:55:15.986650 24108 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1227 08:55:15.986662 24108 kubeadm.go:319]
I1227 08:55:15.986752 24108 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1227 08:55:15.986762 24108 kubeadm.go:319]
I1227 08:55:15.986803 24108 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1227 08:55:15.986808 24108 kubeadm.go:319]
I1227 08:55:15.986860 24108 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1227 08:55:15.986924 24108 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1227 08:55:15.986987 24108 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1227 08:55:15.986995 24108 kubeadm.go:319]
I1227 08:55:15.987083 24108 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1227 08:55:15.987152 24108 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1227 08:55:15.987157 24108 kubeadm.go:319]
I1227 08:55:15.987230 24108 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
I1227 08:55:15.987318 24108 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c \
I1227 08:55:15.987337 24108 kubeadm.go:319] --control-plane
I1227 08:55:15.987343 24108 kubeadm.go:319]
I1227 08:55:15.987420 24108 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1227 08:55:15.987428 24108 kubeadm.go:319]
I1227 08:55:15.987499 24108 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
I1227 08:55:15.987622 24108 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c
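
The --discovery-token-ca-cert-hash in the join commands above is, per kubeadm's documented format, the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo (RFC 7469 pinning format). It can be recomputed from ca.crt to sanity-check a join command; a stand-alone sketch:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA path as staged in this log; point this at any cluster's ca.crt when verifying elsewhere.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the raw DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}

For this cluster the output should match the sha256:493e84... value printed above.
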
I1227 08:55:15.987640 24108 cni.go:84] Creating CNI manager for ""
I1227 08:55:15.987649 24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I1227 08:55:15.989869 24108 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1227 08:55:15.990980 24108 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1227 08:55:15.997094 24108 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
I1227 08:55:15.997119 24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
I1227 08:55:16.018807 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1227 08:55:16.327079 24108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1227 08:55:16.327141 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:16.327146 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276 minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=true
I1227 08:55:16.365159 24108 ops.go:34] apiserver oom_adj: -16
I1227 08:55:16.465863 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:16.966866 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:17.466570 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:17.966578 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:18.466519 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:18.966943 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:19.466148 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:19.966252 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:20.466874 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 08:55:20.559551 24108 kubeadm.go:1114] duration metric: took 4.232470194s to wait for elevateKubeSystemPrivileges
I1227 08:55:20.559594 24108 kubeadm.go:403] duration metric: took 13.890642839s to StartCluster
I1227 08:55:20.559615 24108 settings.go:142] acquiring lock: {Name:mk44fcba3019847ba7794682dc7fa5d4c6839e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:20.559700 24108 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22344-5516/kubeconfig
I1227 08:55:20.560349 24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/kubeconfig: {Name:mk9f130990d4b2bd0dfe5788b549d55d90047007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:55:20.560606 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1227 08:55:20.560624 24108 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1227 08:55:20.560698 24108 addons.go:70] Setting storage-provisioner=true in profile "multinode-899276"
I1227 08:55:20.560599 24108 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 08:55:20.560734 24108 addons.go:70] Setting default-storageclass=true in profile "multinode-899276"
I1227 08:55:20.560754 24108 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "multinode-899276"
I1227 08:55:20.560889 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:55:20.560722 24108 addons.go:239] Setting addon storage-provisioner=true in "multinode-899276"
I1227 08:55:20.560976 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:55:20.563353 24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1227 08:55:20.563858 24108 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1227 08:55:20.563881 24108 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1227 08:55:20.563887 24108 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1227 08:55:20.563895 24108 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
I1227 08:55:20.563910 24108 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
I1227 08:55:20.563922 24108 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
I1227 08:55:20.563927 24108 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
I1227 08:55:20.564267 24108 addons.go:239] Setting addon default-storageclass=true in "multinode-899276"
I1227 08:55:20.564296 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:55:20.566001 24108 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1227 08:55:20.566022 24108 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1227 08:55:20.566660 24108 out.go:179] * Verifying Kubernetes components...
I1227 08:55:20.566668 24108 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1227 08:55:20.568005 24108 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1227 08:55:20.568024 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:55:20.568027 24108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1227 08:55:20.568764 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.569218 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:55:20.569253 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.569506 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:55:20.570678 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.571119 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:55:20.571146 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:55:20.571271 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:55:20.721800 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
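
The pipeline above patches the Corefile held in the coredns ConfigMap: a hosts block mapping host.minikube.internal to the host gateway is inserted just before the `forward . /etc/resolv.conf` line (plus a log directive before errors), then the edited ConfigMap is pushed back with kubectl replace. The hosts insertion on its own, without the kubectl round-trip (injectHostRecord is our name for it):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts stanza immediately before the
// forward plugin line, matching the sed edit in the log above.
func injectHostRecord(corefile, ip string) string {
	stanza := "    hosts {\n       " + ip + " host.minikube.internal\n       fallthrough\n    }\n"
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(stanza)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}

The "host record injected into CoreDNS's ConfigMap" line below confirms the replace succeeded.
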
I1227 08:55:20.853268 24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 08:55:21.022237 24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1227 08:55:21.022257 24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1227 08:55:21.456081 24108 start.go:987] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1227 08:55:21.456682 24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1227 08:55:21.456749 24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1227 08:55:21.457033 24108 node_ready.go:35] waiting up to 6m0s for node "multinode-899276" to be "Ready" ...
I1227 08:55:21.828507 24108 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1227 08:55:21.829821 24108 addons.go:530] duration metric: took 1.269198648s for enable addons: enabled=[storage-provisioner default-storageclass]
I1227 08:55:21.962140 24108 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-899276" context rescaled to 1 replicas
W1227 08:55:23.460520 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:25.461678 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:27.960886 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:30.459943 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:32.460468 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:34.460900 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:36.960939 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
W1227 08:55:39.460258 24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
I1227 08:55:40.960160 24108 node_ready.go:49] node "multinode-899276" is "Ready"
I1227 08:55:40.960196 24108 node_ready.go:38] duration metric: took 19.503123053s for node "multinode-899276" to be "Ready" ...
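
The node_ready wait above is a plain poll loop: re-query the node on an interval, warn on each miss, and stop on Ready or after the 6m deadline. A dependency-free sketch of that pattern (pollUntil and the stubbed check are ours, not minikube's API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil re-runs check every interval until it returns true, returns an
// error, or timeout elapses.
func pollUntil(interval, timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Stand-in for the "is node Ready" query; flips to true after ~2s.
	err := pollUntil(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		ready := time.Since(start) > 2*time.Second
		if !ready {
			fmt.Println(`node "multinode-899276" has "Ready":"False" status (will retry)`)
		}
		return ready, nil
	})
	fmt.Println("done:", err, "after", time.Since(start).Round(time.Millisecond))
}

The same pattern recurs below for the apiserver healthz probe and the kube-system pod checks, with different intervals and deadlines.
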
I1227 08:55:40.960216 24108 api_server.go:52] waiting for apiserver process to appear ...
I1227 08:55:40.960272 24108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1227 08:55:40.980487 24108 api_server.go:72] duration metric: took 20.419735752s to wait for apiserver process to appear ...
I1227 08:55:40.980522 24108 api_server.go:88] waiting for apiserver healthz status ...
I1227 08:55:40.980545 24108 api_server.go:299] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
I1227 08:55:40.985397 24108 api_server.go:325] https://192.168.39.24:8443/healthz returned 200:
ok
I1227 08:55:40.986902 24108 api_server.go:141] control plane version: v1.35.0
I1227 08:55:40.986929 24108 api_server.go:131] duration metric: took 6.398762ms to wait for apiserver health ...
I1227 08:55:40.986938 24108 system_pods.go:43] waiting for kube-system pods to appear ...
I1227 08:55:40.990608 24108 system_pods.go:59] 8 kube-system pods found
I1227 08:55:40.990654 24108 system_pods.go:61] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:40.990664 24108 system_pods.go:61] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:40.990674 24108 system_pods.go:61] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:40.990682 24108 system_pods.go:61] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:40.990688 24108 system_pods.go:61] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:40.990698 24108 system_pods.go:61] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:40.990703 24108 system_pods.go:61] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:40.990715 24108 system_pods.go:61] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:40.990723 24108 system_pods.go:74] duration metric: took 3.778634ms to wait for pod list to return data ...
I1227 08:55:40.990733 24108 default_sa.go:34] waiting for default service account to be created ...
I1227 08:55:40.993709 24108 default_sa.go:45] found service account: "default"
I1227 08:55:40.993729 24108 default_sa.go:55] duration metric: took 2.988456ms for default service account to be created ...
I1227 08:55:40.993736 24108 system_pods.go:116] waiting for k8s-apps to be running ...
I1227 08:55:40.996625 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:40.996661 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:40.996672 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:40.996683 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:40.996690 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:40.996698 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:40.996709 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:40.996716 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:40.996727 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:40.996757 24108 retry.go:84] will retry after 200ms: missing components: kube-dns
I1227 08:55:41.222991 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:41.223041 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:41.223072 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:41.223082 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:41.223088 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:41.223095 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:41.223101 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:41.223107 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:41.223115 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:41.595420 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:41.595456 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 08:55:41.595463 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:41.595468 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:41.595472 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:41.595476 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:41.595479 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:41.595482 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:41.595487 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 08:55:41.921377 24108 system_pods.go:86] 8 kube-system pods found
I1227 08:55:41.921417 24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Running
I1227 08:55:41.921426 24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
I1227 08:55:41.921432 24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
I1227 08:55:41.921437 24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
I1227 08:55:41.921443 24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
I1227 08:55:41.921448 24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
I1227 08:55:41.921453 24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
I1227 08:55:41.921458 24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Running
I1227 08:55:41.921468 24108 system_pods.go:126] duration metric: took 927.725772ms to wait for k8s-apps to be running ...
I1227 08:55:41.921482 24108 system_svc.go:44] waiting for kubelet service to be running ....
I1227 08:55:41.921538 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 08:55:41.943521 24108 system_svc.go:56] duration metric: took 22.03282ms WaitForService to wait for kubelet
I1227 08:55:41.943547 24108 kubeadm.go:587] duration metric: took 21.382801319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 08:55:41.943563 24108 node_conditions.go:102] verifying NodePressure condition ...
I1227 08:55:41.946923 24108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1227 08:55:41.946949 24108 node_conditions.go:123] node cpu capacity is 2
I1227 08:55:41.946964 24108 node_conditions.go:105] duration metric: took 3.396847ms to run NodePressure ...
I1227 08:55:41.946975 24108 start.go:242] waiting for startup goroutines ...
I1227 08:55:41.946982 24108 start.go:247] waiting for cluster config update ...
I1227 08:55:41.946995 24108 start.go:256] writing updated cluster config ...
I1227 08:55:41.949394 24108 out.go:203]
I1227 08:55:41.951062 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:55:41.951143 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:55:41.952889 24108 out.go:179] * Starting "multinode-899276-m02" worker node in "multinode-899276" cluster
I1227 08:55:41.954248 24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 08:55:41.954267 24108 cache.go:65] Caching tarball of preloaded images
I1227 08:55:41.954391 24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1227 08:55:41.954406 24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1227 08:55:41.954483 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:55:41.954681 24108 start.go:360] acquireMachinesLock for multinode-899276-m02: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1227 08:55:41.954734 24108 start.go:364] duration metric: took 30.88µs to acquireMachinesLock for "multinode-899276-m02"
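
acquireMachinesLock serializes machine creation per profile; the log shows it configured with a 500ms retry delay and a 13-minute timeout. An illustrative in-process analogue in Go of a timeout-guarded named lock (minikube's real lock is cross-process, so this is only a sketch of the pattern):

    package main

    import (
        "fmt"
        "time"
    )

    // locks maps a lock name to a one-slot channel acting as a mutex.
    // Purely illustrative; not minikube's implementation.
    var locks = map[string]chan struct{}{
        "multinode-899276-m02": make(chan struct{}, 1),
    }

    // acquire blocks until the named lock is free or timeout elapses.
    func acquire(name string, timeout time.Duration) (release func(), err error) {
        ch := locks[name]
        select {
        case ch <- struct{}{}:
            return func() { <-ch }, nil
        case <-time.After(timeout):
            return nil, fmt.Errorf("timed out acquiring lock %q", name)
        }
    }

    func main() {
        release, err := acquire("multinode-899276-m02", 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; provisioning can proceed")
    }
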
I1227 08:55:41.954766 24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
I1227 08:55:41.954827 24108 start.go:125] createHost starting for "m02" (driver="kvm2")
I1227 08:55:41.956569 24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I1227 08:55:41.956662 24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
I1227 08:55:41.956692 24108 client.go:173] LocalClient.Create starting
I1227 08:55:41.956761 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
I1227 08:55:41.956803 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:55:41.956824 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:55:41.956873 24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
I1227 08:55:41.956892 24108 main.go:144] libmachine: Decoding PEM data...
I1227 08:55:41.956910 24108 main.go:144] libmachine: Parsing certificate...
I1227 08:55:41.957088 24108 main.go:144] libmachine: creating domain...
I1227 08:55:41.957098 24108 main.go:144] libmachine: creating network...
I1227 08:55:41.958253 24108 main.go:144] libmachine: found existing default network
I1227 08:55:41.958505 24108 main.go:144] libmachine: <network connections='1'>
  <name>default</name>
  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:10:a2:1d'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
I1227 08:55:41.958687 24108 main.go:144] libmachine: found existing mk-multinode-899276 private network, skipping creation
I1227 08:55:41.958885 24108 main.go:144] libmachine: <network>
  <name>mk-multinode-899276</name>
  <uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:7e:96:0f'/>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
      <host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>
    </dhcp>
  </ip>
</network>
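
The driver reuses the two existing libvirt networks rather than recreating them. For illustration only, a small Go program that parses the <network> XML above with the standard encoding/xml package to recover the bridge name and DHCP range; the struct shape here is mine, not libvirt's Go bindings.

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    // network models only the fields read below from the libvirt
    // <network> document shown in the log.
    type network struct {
        Name   string `xml:"name"`
        Bridge struct {
            Name string `xml:"name,attr"`
        } `xml:"bridge"`
        IP struct {
            Address string `xml:"address,attr"`
            DHCP    struct {
                Range struct {
                    Start string `xml:"start,attr"`
                    End   string `xml:"end,attr"`
                } `xml:"range"`
            } `xml:"dhcp"`
        } `xml:"ip"`
    }

    func main() {
        doc := `<network>
      <name>mk-multinode-899276</name>
      <bridge name='virbr1' stp='on' delay='0'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
      </ip>
    </network>`
        var n network
        if err := xml.Unmarshal([]byte(doc), &n); err != nil {
            panic(err)
        }
        fmt.Printf("%s on %s: DHCP %s-%s\n",
            n.Name, n.Bridge.Name, n.IP.DHCP.Range.Start, n.IP.DHCP.Range.End)
    }
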
I1227 08:55:41.959076 24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
I1227 08:55:41.959099 24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
I1227 08:55:41.959107 24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:55:41.959186 24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
I1227 08:55:42.180540 24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa...
I1227 08:55:42.254861 24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk...
I1227 08:55:42.254917 24108 main.go:144] libmachine: Writing magic tar header
I1227 08:55:42.254943 24108 main.go:144] libmachine: Writing SSH key tar header
I1227 08:55:42.255061 24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
I1227 08:55:42.255137 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02
I1227 08:55:42.255165 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 (perms=drwx------)
I1227 08:55:42.255182 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
I1227 08:55:42.255201 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
I1227 08:55:42.255216 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
I1227 08:55:42.255227 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
I1227 08:55:42.255238 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
I1227 08:55:42.255257 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
I1227 08:55:42.255282 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1227 08:55:42.255298 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1227 08:55:42.255318 24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
I1227 08:55:42.255333 24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1227 08:55:42.255348 24108 main.go:144] libmachine: checking permissions on dir: /home
I1227 08:55:42.255359 24108 main.go:144] libmachine: skipping /home - not owner
I1227 08:55:42.255363 24108 main.go:144] libmachine: defining domain...
I1227 08:55:42.256580 24108 main.go:144] libmachine: defining domain using XML:
<domain type='kvm'>
  <name>multinode-899276-m02</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-multinode-899276'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
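
With the XML composed, the domain is defined against libvirt. As a rough sketch of that step, the definition could be shelled out to the virsh CLI as below; minikube itself talks to libvirt through Go bindings rather than shelling out, so this is an assumption-laden illustration only.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // defineDomain writes the domain XML to a temp file and runs
    // `virsh define` against the system libvirt daemon.
    func defineDomain(domainXML string) error {
        f, err := os.CreateTemp("", "domain-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(domainXML); err != nil {
            return err
        }
        if err := f.Close(); err != nil {
            return err
        }
        out, err := exec.Command("virsh", "--connect", "qemu:///system",
            "define", f.Name()).CombinedOutput()
        if err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder XML; the real document is the one logged above.
        if err := defineDomain("<domain type='kvm'>...</domain>"); err != nil {
            fmt.Println(err)
        }
    }
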
I1227 08:55:42.265000 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:b3:04:b6 in network default
I1227 08:55:42.265650 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:42.265669 24108 main.go:144] libmachine: starting domain...
I1227 08:55:42.265674 24108 main.go:144] libmachine: ensuring networks are active...
I1227 08:55:42.266690 24108 main.go:144] libmachine: Ensuring network default is active
I1227 08:55:42.267245 24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
I1227 08:55:42.267833 24108 main.go:144] libmachine: getting domain XML...
I1227 08:55:42.269145 24108 main.go:144] libmachine: starting domain XML:
<domain type='kvm'>
  <name>multinode-899276-m02</name>
  <uuid>08f0927e-00b1-40b5-b768-ac07d0776e28</uuid>
  <memory unit='KiB'>3145728</memory>
  <currentMemory unit='KiB'>3145728</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' io='threads'/>
      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
      <target dev='hda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='scsi' index='0' model='lsilogic'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:9b:0b:64'/>
      <source network='mk-multinode-899276'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:b3:04:b6'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <audio id='1' type='none'/>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </rng>
  </devices>
</domain>
I1227 08:55:43.575420 24108 main.go:144] libmachine: waiting for domain to start...
I1227 08:55:43.576915 24108 main.go:144] libmachine: domain is now running
I1227 08:55:43.576935 24108 main.go:144] libmachine: waiting for IP...
I1227 08:55:43.577720 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:43.578257 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:43.578273 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:43.578564 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:43.833127 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:43.833729 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:43.833744 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:43.834083 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:44.161636 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:44.162394 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:44.162413 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:44.162749 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:44.477602 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:44.478263 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:44.478282 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:44.478685 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:44.857427 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:44.858004 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:44.858026 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:44.858397 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:45.619396 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:45.619938 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:45.619953 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:45.620268 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:46.214206 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:46.214738 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:46.214760 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:46.215107 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:47.368589 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:47.369148 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:47.369169 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:47.369473 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:48.790105 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:48.790775 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:48.790792 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:48.791137 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:50.057612 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:50.058205 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:50.058230 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:50.058563 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:51.571769 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:51.572501 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:51.572522 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:51.572969 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:54.369906 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:54.370596 24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
I1227 08:55:54.370610 24108 main.go:144] libmachine: trying to list again with source=arp
I1227 08:55:54.370961 24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
I1227 08:55:57.241023 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.241672 24108 main.go:144] libmachine: domain multinode-899276-m02 has current primary IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.241689 24108 main.go:144] libmachine: found domain IP: 192.168.39.160
I1227 08:55:57.241696 24108 main.go:144] libmachine: reserving static IP address...
I1227 08:55:57.242083 24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276-m02", mac: "52:54:00:9b:0b:64", ip: "192.168.39.160"} in network mk-multinode-899276
I1227 08:55:57.450637 24108 main.go:144] libmachine: reserved static IP address 192.168.39.160 for domain multinode-899276-m02
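
Note how the gaps between the IP-wait attempts above widen from roughly 250ms to several seconds before the DHCP lease appears. A compact Go sketch of that retry-with-growing-delay pattern; lookupIP is a hypothetical stand-in for the lease (and ARP fallback) query the driver performs.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP stands in, hypothetically, for querying libvirt's DHCP
    // leases (source=lease, then source=arp) for the domain's MAC.
    func lookupIP(mac string) (string, bool) {
        return "", false // no lease yet
    }

    // waitForIP retries with a gradually growing delay, similar to the
    // widening intervals between attempts in the log above.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
        delay := 250 * time.Millisecond
        for end := time.Now().Add(deadline); time.Now().Before(end); {
            if ip, ok := lookupIP(mac); ok {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // back off gradually
            }
        }
        return "", errors.New("timed out waiting for IP")
    }

    func main() {
        ip, err := waitForIP("52:54:00:9b:0b:64", 2*time.Second)
        fmt.Println(ip, err)
    }
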
I1227 08:55:57.450661 24108 main.go:144] libmachine: waiting for SSH...
I1227 08:55:57.450668 24108 main.go:144] libmachine: Getting to WaitForSSH function...
I1227 08:55:57.453744 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.454265 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.454291 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.454489 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.454732 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.454744 24108 main.go:144] libmachine: About to run SSH command:
exit 0
I1227 08:55:57.569604 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 08:55:57.570099 24108 main.go:144] libmachine: domain creation complete
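
WaitForSSH's probe is simply running `exit 0` over SSH until the guest answers, as the lines above show. A self-contained sketch of the same liveness check using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log as examples, and this is not an API minikube exposes.

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady dials the guest and runs `exit 0`, the same probe
    // WaitForSSH uses above.
    func sshReady(addr, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
            Timeout:         5 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        fmt.Println(sshReady("192.168.39.160:22", "id_rsa"))
    }
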
I1227 08:55:57.571770 24108 machine.go:94] provisionDockerMachine start ...
I1227 08:55:57.574152 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.574608 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.574633 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.574862 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.575132 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.575147 24108 main.go:144] libmachine: About to run SSH command:
hostname
I1227 08:55:57.686687 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
I1227 08:55:57.686742 24108 buildroot.go:166] provisioning hostname "multinode-899276-m02"
I1227 08:55:57.689982 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.690439 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.690482 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.690712 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.690987 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.691006 24108 main.go:144] libmachine: About to run SSH command:
sudo hostname multinode-899276-m02 && echo "multinode-899276-m02" | sudo tee /etc/hostname
I1227 08:55:57.825642 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276-m02
I1227 08:55:57.828982 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.829434 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.829471 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.829664 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:57.829868 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:57.829883 24108 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-899276-m02' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276-m02/g' /etc/hosts;
  else
    echo '127.0.1.1 multinode-899276-m02' | sudo tee -a /etc/hosts;
  fi
fi
I1227 08:55:57.955353 24108 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 08:55:57.955387 24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
I1227 08:55:57.955404 24108 buildroot.go:174] setting up certificates
I1227 08:55:57.955412 24108 provision.go:84] configureAuth start
I1227 08:55:57.958329 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.958721 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.958743 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.961212 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.961604 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:57.961634 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:57.961769 24108 provision.go:143] copyHostCerts
I1227 08:55:57.961801 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:55:57.961840 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
I1227 08:55:57.961853 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
I1227 08:55:57.961943 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
I1227 08:55:57.962064 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:55:57.962093 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
I1227 08:55:57.962101 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
I1227 08:55:57.962149 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
I1227 08:55:57.962220 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:55:57.962245 24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
I1227 08:55:57.962253 24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
I1227 08:55:57.962290 24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
I1227 08:55:57.962357 24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276-m02 san=[127.0.0.1 192.168.39.160 localhost minikube multinode-899276-m02]
I1227 08:55:58.062355 24108 provision.go:177] copyRemoteCerts
I1227 08:55:58.062418 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 08:55:58.065702 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.066127 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.066154 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.066319 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
I1227 08:55:58.156852 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 08:55:58.156925 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 08:55:58.186973 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 08:55:58.187035 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I1227 08:55:58.216314 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 08:55:58.216378 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 08:55:58.250146 24108 provision.go:87] duration metric: took 294.721391ms to configureAuth
I1227 08:55:58.250177 24108 buildroot.go:189] setting minikube options for container-runtime
I1227 08:55:58.250357 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:55:58.252989 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.253461 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.253487 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.253690 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:58.253921 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:58.253934 24108 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 08:55:58.373697 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
I1227 08:55:58.373723 24108 buildroot.go:70] root file system type: tmpfs
I1227 08:55:58.373873 24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 08:55:58.376713 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.377114 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.377139 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.377329 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:58.377512 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:58.377555 24108 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment="NO_PROXY=192.168.39.24"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 08:55:58.508330 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment=NO_PROXY=192.168.39.24
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
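
The unit file is rendered host-side with the control-plane IP baked into the NO_PROXY environment before being written over SSH. A trimmed Go sketch of generating such a unit with text/template; only the proxy setting is parameterized here, whereas the real unit carries the full flag set shown above.

    package main

    import (
        "os"
        "text/template"
    )

    // unitTmpl is a deliberately trimmed illustration of the
    // docker.service rendered in the log above.
    const unitTmpl = `[Unit]
    Description=Docker Application Container Engine
    Requires=docker.socket

    [Service]
    Type=notify
    Restart=always
    {{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
    {{end}}ExecStart=/usr/bin/dockerd -H fd://

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unitTmpl))
        // The control-plane IP is excluded from proxying, as in the log.
        t.Execute(os.Stdout, struct{ NoProxy string }{NoProxy: "192.168.39.24"})
    }
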
I1227 08:55:58.511413 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.511851 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:58.511879 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:58.512069 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:58.512332 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:58.512351 24108 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 08:55:59.431853 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
I1227 08:55:59.431877 24108 machine.go:97] duration metric: took 1.86008098s to provisionDockerMachine
I1227 08:55:59.431888 24108 client.go:176] duration metric: took 17.475186189s to LocalClient.Create
I1227 08:55:59.431902 24108 start.go:167] duration metric: took 17.47524121s to libmachine.API.Create "multinode-899276"
I1227 08:55:59.431909 24108 start.go:293] postStartSetup for "multinode-899276-m02" (driver="kvm2")
I1227 08:55:59.431918 24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 08:55:59.431968 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 08:55:59.434620 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.435132 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.435167 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.435355 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
I1227 08:55:59.525674 24108 ssh_runner.go:195] Run: cat /etc/os-release
I1227 08:55:59.530511 24108 info.go:137] Remote host: Buildroot 2025.02
I1227 08:55:59.530547 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
I1227 08:55:59.530632 24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
I1227 08:55:59.530706 24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
I1227 08:55:59.530716 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
I1227 08:55:59.530821 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 08:55:59.542821 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:55:59.573575 24108 start.go:296] duration metric: took 141.651568ms for postStartSetup
I1227 08:55:59.576745 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.577190 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.577225 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.577486 24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
I1227 08:55:59.577738 24108 start.go:128] duration metric: took 17.622900484s to createHost
I1227 08:55:59.579881 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.580246 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.580267 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.580524 24108 main.go:144] libmachine: Using SSH client type: native
I1227 08:55:59.580736 24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.160 22 <nil> <nil>}
I1227 08:55:59.580748 24108 main.go:144] libmachine: About to run SSH command:
date +%s.%N
I1227 08:55:59.695810 24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825759.656998713
I1227 08:55:59.695838 24108 fix.go:216] guest clock: 1766825759.656998713
I1227 08:55:59.695847 24108 fix.go:229] Guest: 2025-12-27 08:55:59.656998713 +0000 UTC Remote: 2025-12-27 08:55:59.577753428 +0000 UTC m=+82.275426938 (delta=79.245285ms)
I1227 08:55:59.695869 24108 fix.go:200] guest clock delta is within tolerance: 79.245285ms
I1227 08:55:59.695877 24108 start.go:83] releasing machines lock for "multinode-899276-m02", held for 17.741133225s
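
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the skew is within tolerance, as the delta of ~79ms above shows. A sketch of that comparison; float parsing loses a little nanosecond precision, which is immaterial at this tolerance.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta converts the guest's `date +%s.%N` output into a time
    // and returns its offset from the host-side timestamp.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Unix(0, 1766825759577753428) // host-side timestamp
        d, err := clockDelta("1766825759.656998713", host)
        if err != nil {
            panic(err)
        }
        // Within a small tolerance, no clock adjustment is needed.
        fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < time.Second)
    }
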
I1227 08:55:59.698823 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.699365 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.699403 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.701968 24108 out.go:179] * Found network options:
I1227 08:55:59.703396 24108 out.go:179] - NO_PROXY=192.168.39.24
W1227 08:55:59.704647 24108 proxy.go:120] fail to check proxy env: Error ip not in block
W1227 08:55:59.705042 24108 proxy.go:120] fail to check proxy env: Error ip not in block
I1227 08:55:59.705131 24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1227 08:55:59.705131 24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 08:55:59.708339 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.708387 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.708760 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.708817 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:55:59.708844 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.708889 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:55:59.709024 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
I1227 08:55:59.709228 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
W1227 08:55:59.793520 24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 08:55:59.793609 24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 08:55:59.816238 24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1227 08:55:59.816269 24108 start.go:496] detecting cgroup driver to use...
I1227 08:55:59.816301 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:55:59.816397 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:55:59.839936 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 08:55:59.852570 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 08:55:59.865005 24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 08:55:59.865103 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 08:55:59.877853 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:55:59.890799 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 08:55:59.903794 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 08:55:59.916281 24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 08:55:59.929816 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 08:55:59.942187 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 08:55:59.955245 24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
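
The sed pipeline above rewrites containerd's config in place; the key change is forcing SystemdCgroup = true so containerd matches the "systemd" cgroup driver chosen for Kubernetes 1.35.0+. For clarity, here is the first rewrite expressed as an equivalent Go regexp (not how minikube applies it, since the log shows it running sed over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the sed rewrite above:
    //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g
    func setSystemdCgroup(toml string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(toml, "${1}SystemdCgroup = true")
    }

    func main() {
        in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
    `
        fmt.Print(setSystemdCgroup(in))
    }
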
I1227 08:55:59.968552 24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 08:55:59.979484 24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1227 08:55:59.979563 24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1227 08:55:59.993561 24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
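
After loading br_netfilter, the runner enables IPv4 forwarding by writing directly to procfs. The Go equivalent of that one-liner (it needs root, like the sudo invocation above):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` above.
        err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
        if err != nil {
            fmt.Println(err)
        }
    }
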
I1227 08:56:00.006240 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:00.152118 24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 08:56:00.190124 24108 start.go:496] detecting cgroup driver to use...
I1227 08:56:00.190172 24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
I1227 08:56:00.190230 24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 08:56:00.211952 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:56:00.237208 24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 08:56:00.259010 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 08:56:00.275879 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:56:00.293605 24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 08:56:00.326414 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 08:56:00.342364 24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 08:56:00.365931 24108 ssh_runner.go:195] Run: which cri-dockerd
I1227 08:56:00.370257 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 08:56:00.382716 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1227 08:56:00.404739 24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 08:56:00.548335 24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 08:56:00.689510 24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 08:56:00.689570 24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1227 08:56:00.729510 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 08:56:00.746884 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:00.890844 24108 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 08:56:01.355108 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 08:56:01.370599 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 08:56:01.386540 24108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I1227 08:56:01.404096 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:56:01.419794 24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 08:56:01.561520 24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 08:56:01.708164 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:01.863090 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 08:56:01.899043 24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 08:56:01.915288 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:02.062800 24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 08:56:02.174498 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 08:56:02.198066 24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 08:56:02.198172 24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 08:56:02.204239 24108 start.go:574] Will wait 60s for crictl version
I1227 08:56:02.204318 24108 ssh_runner.go:195] Run: which crictl
I1227 08:56:02.208415 24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1227 08:56:02.242462 24108 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.2
RuntimeApiVersion: v1
I1227 08:56:02.242547 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:56:02.272210 24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 08:56:02.305864 24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
I1227 08:56:02.307155 24108 out.go:179] - env NO_PROXY=192.168.39.24
I1227 08:56:02.310958 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:56:02.311334 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
I1227 08:56:02.311356 24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
I1227 08:56:02.311519 24108 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1227 08:56:02.316034 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
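The /etc/hosts rewrite above uses a filter-then-append idiom so repeated starts replace the host.minikube.internal entry instead of accumulating duplicates. Spelled out (the temp-file name is illustrative):

    # drop any stale mapping, append the fresh one, then install the result via cp
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.39.1 host.minikube.internal"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts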
I1227 08:56:02.330706 24108 mustload.go:66] Loading cluster: multinode-899276
I1227 08:56:02.330927 24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:56:02.332363 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:56:02.332574 24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.160
I1227 08:56:02.332593 24108 certs.go:195] generating shared ca certs ...
I1227 08:56:02.332615 24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 08:56:02.332749 24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
I1227 08:56:02.332808 24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
I1227 08:56:02.332826 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 08:56:02.332851 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 08:56:02.332871 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 08:56:02.332887 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 08:56:02.332965 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
W1227 08:56:02.333010 24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
I1227 08:56:02.333027 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
I1227 08:56:02.333079 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
I1227 08:56:02.333119 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
I1227 08:56:02.333153 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
I1227 08:56:02.333216 24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
I1227 08:56:02.333264 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.333285 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
I1227 08:56:02.333302 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
I1227 08:56:02.333328 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 08:56:02.365645 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1227 08:56:02.395629 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 08:56:02.425519 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1227 08:56:02.455554 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 08:56:02.486238 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
I1227 08:56:02.515842 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
I1227 08:56:02.545758 24108 ssh_runner.go:195] Run: openssl version
I1227 08:56:02.552395 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
I1227 08:56:02.564618 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
I1227 08:56:02.577235 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
I1227 08:56:02.582685 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
I1227 08:56:02.582759 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
I1227 08:56:02.590482 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 08:56:02.601896 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
I1227 08:56:02.613606 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.625518 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 08:56:02.637508 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.642823 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.642901 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 08:56:02.650764 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 08:56:02.663547 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 08:56:02.675853 24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
I1227 08:56:02.688458 24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
I1227 08:56:02.701658 24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
I1227 08:56:02.706958 24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
I1227 08:56:02.707033 24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
I1227 08:56:02.714242 24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 08:56:02.726789 24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
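The openssl/ln sequence above installs each CA cert under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how TLS libraries locate a CA in /etc/ssl/certs. A sketch for one cert, mirroring the commands in the log:

    # compute the subject hash OpenSSL uses for directory lookup, then link the cert under it
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"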
I1227 08:56:02.740816 24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 08:56:02.745870 24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 08:56:02.745924 24108 kubeadm.go:935] updating node {m02 192.168.39.160 8443 v1.35.0 docker false true} ...
I1227 08:56:02.746010 24108 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
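The kubelet unit above leans on systemd drop-in semantics: the bare `ExecStart=` line clears the ExecStart inherited from the base unit so the next line can supply the node-specific command. A sketch of writing such a drop-in by hand (the path matches the 10-kubeadm.conf pushed below; flags abbreviated from the ExecStart shown above, so treat the exact file contents as an assumption):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=docker.socket
    [Service]
    # first line resets the inherited ExecStart; second line replaces it
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --hostname-override=multinode-899276-m02 --node-ip=192.168.39.160 --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet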
I1227 08:56:02.746115 24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 08:56:02.758129 24108 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
Initiating transfer...
I1227 08:56:02.758244 24108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
I1227 08:56:02.770426 24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
I1227 08:56:02.770451 24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
I1227 08:56:02.770474 24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 08:56:02.770479 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm -> /var/lib/minikube/binaries/v1.35.0/kubeadm
I1227 08:56:02.770428 24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
I1227 08:56:02.770532 24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
I1227 08:56:02.770547 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl -> /var/lib/minikube/binaries/v1.35.0/kubectl
I1227 08:56:02.770638 24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
I1227 08:56:02.775599 24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
I1227 08:56:02.775636 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
I1227 08:56:02.800423 24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet -> /var/lib/minikube/binaries/v1.35.0/kubelet
I1227 08:56:02.800448 24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
I1227 08:56:02.800474 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
I1227 08:56:02.800530 24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
I1227 08:56:02.847555 24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
I1227 08:56:02.847596 24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
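Each binary above is fetched from dl.k8s.io with its .sha256 companion used as the checksum source (the `checksum=file:` URLs in the log). A manual equivalent for kubelet (download URLs taken from the log; the install destination mirrors /var/lib/minikube/binaries):

    V=v1.35.0
    curl -fsSLo kubelet "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet"
    curl -fsSLo kubelet.sha256 "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet.sha256"
    # the .sha256 file holds the bare hex digest; sha256sum expects "digest  filename"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet "/var/lib/minikube/binaries/${V}/kubelet"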
I1227 08:56:03.589571 24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I1227 08:56:03.603768 24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I1227 08:56:03.631212 24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 08:56:03.655890 24108 ssh_runner.go:195] Run: grep 192.168.39.24 control-plane.minikube.internal$ /etc/hosts
I1227 08:56:03.660915 24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 08:56:03.680065 24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 08:56:03.823402 24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 08:56:03.862307 24108 host.go:66] Checking if "multinode-899276" exists ...
I1227 08:56:03.862561 24108 start.go:318] joinCluster: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0
ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 08:56:03.862676 24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm token create --print-join-command --ttl=0"
I1227 08:56:03.865388 24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:56:03.865858 24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
I1227 08:56:03.865900 24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
I1227 08:56:03.866073 24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
I1227 08:56:04.026904 24108 start.go:344] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
I1227 08:56:04.027011 24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9k0kod.6geqtmlyqvlg3686 --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-899276-m02"
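The join command run above is exactly what `kubeadm token create --print-join-command --ttl=0` emitted on the control plane a second earlier, with minikube appending the cri-dockerd socket and node name. The two halves, with the cluster-specific token and CA hash redacted:

    # on the control-plane node: mint a non-expiring bootstrap token and print the join command
    sudo kubeadm token create --print-join-command --ttl=0
    # on the worker: run it, pointing kubeadm at cri-dockerd and fixing the node name
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock \
      --node-name=multinode-899276-m02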
I1227 08:56:04.959833 24108 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
I1227 08:56:05.276831 24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false
I1227 08:56:05.365119 24108 start.go:320] duration metric: took 1.502556165s to joinCluster
I1227 08:56:05.367341 24108 out.go:203]
W1227 08:56:05.368707 24108 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false: Process exited with status 1
stdout:
stderr:
Error from server (NotFound): nodes "multinode-899276-m02" not found
W1227 08:56:05.368724 24108 out.go:285] *
W1227 08:56:05.369029 24108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 08:56:05.370349 24108 out.go:203]
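The GUEST_START failure above is a timing race rather than a broken join: the label command ran at 08:56:05, the same second the worker's Node object was created (its CreationTimestamp in the node dump below is also 08:56:05), so the API server transiently answered NotFound. A sketch of one way to close that window by polling for the Node before labeling (a hypothetical workaround, not what minikube does here; the kubeconfig flag is omitted for brevity):

    # wait up to ~60s for the worker's Node object to appear, then label it
    for i in $(seq 1 30); do
      kubectl get node multinode-899276-m02 >/dev/null 2>&1 && break
      sleep 2
    done
    kubectl label --overwrite nodes multinode-899276-m02 minikube.k8s.io/primary=false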
==> Docker <==
Dec 27 08:55:03 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:03.484295147Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
Dec 27 08:55:03 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:03.484309203Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
Dec 27 08:55:03 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:03.498172293Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
Dec 27 08:55:04 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:04.998776948Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.109632332Z" level=info msg="Loading containers: start."
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.247245769Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.377426026Z" level=info msg="Loading containers: done."
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.391637269Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.391811290Z" level=info msg="Initializing buildkit"
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.413046081Z" level=info msg="Completed buildkit initialization"
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419503264Z" level=info msg="Daemon has completed initialization"
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419576305Z" level=info msg="API listen on /var/run/docker.sock"
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419733300Z" level=info msg="API listen on /run/docker.sock"
Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419775153Z" level=info msg="API listen on [::]:2376"
Dec 27 08:55:05 multinode-899276 systemd[1]: Started Docker Application Container Engine.
Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d6e78e0ce85e8fe5edb8277132aa64d3c6e7b854ca063f186efe83036788a703/resolv.conf as [nameserver 192.168.122.1]"
Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/84314fd3b6e4330cc6b60d3efa4271b1b31c8f7297dbc6f7810f7d4222821a3c/resolv.conf as [nameserver 192.168.122.1]"
Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/01c9987cccbc7847d3b2300457909a1b20a5c3ab68ebdcb2787f46b9223e82fe/resolv.conf as [nameserver 192.168.122.1]"
Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e30cff9be5d8f21e22f56e32fdf4665f38efb1df6a4b4088fd9482e8e3f11b25/resolv.conf as [nameserver 192.168.122.1]"
Dec 27 08:55:19 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Dec 27 08:55:21 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d6ec4f5debfedd33fc26996965caee4b0790894833f749df68708096cc935f1/resolv.conf as [nameserver 192.168.122.1]"
Dec 27 08:55:21 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b5c9d2f69beb277a5fa8a92c4c1be6942492e1323ecd969f21893fb56053bd2/resolv.conf as [nameserver 192.168.122.1]"
Dec 27 08:55:25 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:25Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88: Status: Downloaded newer image for kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"
Dec 27 08:55:41 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0e0169a737f1b2eff8f1daf82ec9040343a68bccda0dbcd16c6ebd9a120493b2/resolv.conf as [nameserver 192.168.122.1]"
Dec 27 08:55:41 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bed607b026b4fde1069a1cde835d4fb71c333fa7c430321acf31a9a7b911f0b/resolv.conf as [nameserver 192.168.122.1]"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
6895d0c824741 aa5e3ebc0dfed 25 seconds ago Running coredns 0 0e0169a737f1b coredns-7d764666f9-952ns kube-system
12a2f3326d0f4 6e38f40d628db 25 seconds ago Running storage-provisioner 0 5bed607b026b4 storage-provisioner kube-system
a7b61d118b3f1 kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae 41 seconds ago Running kindnet-cni 0 4b5c9d2f69beb kindnet-mgnsl kube-system
d50ff81fb41a6 32652ff1bbe6b 45 seconds ago Running kube-proxy 0 4d6ec4f5debfe kube-proxy-rrb2x kube-system
806a4f701d170 2c9a4b058bd7e 56 seconds ago Running kube-controller-manager 0 e30cff9be5d8f kube-controller-manager-multinode-899276 kube-system
8f2fcc85e5e1f 550794e3b12ac 56 seconds ago Running kube-scheduler 0 01c9987cccbc7 kube-scheduler-multinode-899276 kube-system
14fb1b4cc933a 5c6acd67e9cd1 56 seconds ago Running kube-apiserver 0 84314fd3b6e43 kube-apiserver-multinode-899276 kube-system
4ca9b8bb650e0 0a108f7189562 56 seconds ago Running etcd 0 d6e78e0ce85e8 etcd-multinode-899276 kube-system
==> coredns [6895d0c82474] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.13.1
linux/amd64, go1.25.2, 1db4568
[INFO] 127.0.0.1:51366 - 39875 "HINFO IN 597089617242721093.8521952542865293643. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.126758929s
==> describe nodes <==
Name: multinode-899276
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-899276
kubernetes.io/os=linux
minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
minikube.k8s.io/name=multinode-899276
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 27 Dec 2025 08:55:12 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-899276
AcquireTime: <unset>
RenewTime: Sat, 27 Dec 2025 08:55:56 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 27 Dec 2025 08:55:46 +0000 Sat, 27 Dec 2025 08:55:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 27 Dec 2025 08:55:46 +0000 Sat, 27 Dec 2025 08:55:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 27 Dec 2025 08:55:46 +0000 Sat, 27 Dec 2025 08:55:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 27 Dec 2025 08:55:46 +0000 Sat, 27 Dec 2025 08:55:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.24
Hostname: multinode-899276
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035912Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035912Ki
pods: 110
System Info:
Machine ID: 6d370929938249538ba64fb6eca3e648
System UUID: 6d370929-9382-4953-8ba6-4fb6eca3e648
Boot ID: e7571780-ff7a-4d59-887f-f7dbfc0c1beb
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.2
Kubelet Version: v1.35.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-7d764666f9-952ns 100m (5%) 0 (0%) 70Mi (2%) 170Mi (5%) 46s
kube-system etcd-multinode-899276 100m (5%) 0 (0%) 100Mi (3%) 0 (0%) 53s
kube-system kindnet-mgnsl 100m (5%) 100m (5%) 50Mi (1%) 50Mi (1%) 46s
kube-system kube-apiserver-multinode-899276 250m (12%) 0 (0%) 0 (0%) 0 (0%) 52s
kube-system kube-controller-manager-multinode-899276 200m (10%) 0 (0%) 0 (0%) 0 (0%) 51s
kube-system kube-proxy-rrb2x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46s
kube-system kube-scheduler-multinode-899276 100m (5%) 0 (0%) 0 (0%) 0 (0%) 53s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (7%) 220Mi (7%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal RegisteredNode 47s node-controller Node multinode-899276 event: Registered Node multinode-899276 in Controller
Name: multinode-899276-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-899276-m02
kubernetes.io/os=linux
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 27 Dec 2025 08:56:05 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease: Failed to get lease: leases.coordination.k8s.io "multinode-899276-m02" not found
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 27 Dec 2025 08:56:05 +0000 Sat, 27 Dec 2025 08:56:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 27 Dec 2025 08:56:05 +0000 Sat, 27 Dec 2025 08:56:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 27 Dec 2025 08:56:05 +0000 Sat, 27 Dec 2025 08:56:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 27 Dec 2025 08:56:05 +0000 Sat, 27 Dec 2025 08:56:05 +0000 KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, CSINode is not yet initialized]
Addresses:
InternalIP: 192.168.39.160
Hostname: multinode-899276-m02
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035912Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035912Ki
pods: 110
System Info:
Machine ID: 08f0927e00b140b5b768ac07d0776e28
System UUID: 08f0927e-00b1-40b5-b768-ac07d0776e28
Boot ID: 1d4ac048-9867-48e6-96eb-9e9bc0666768
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.2
Kubelet Version: v1.35.0
Kube-Proxy Version:
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-4pk8r 100m (5%) 100m (5%) 50Mi (1%) 50Mi (1%) 1s
kube-system kube-proxy-xhrn8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (1%) 50Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events: <none>
==> dmesg <==
[Dec27 08:54] Booted with the nomodeset parameter. Only the system framebuffer will be available
[ +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.000043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.001306] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
[ +1.170243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.117819] kauditd_printk_skb: 1 callbacks suppressed
[Dec27 08:55] kauditd_printk_skb: 373 callbacks suppressed
[ +0.102827] kauditd_printk_skb: 205 callbacks suppressed
[ +0.160897] kauditd_printk_skb: 221 callbacks suppressed
[ +0.244934] kauditd_printk_skb: 18 callbacks suppressed
[ +4.325682] kauditd_printk_skb: 165 callbacks suppressed
[ +14.621191] kauditd_printk_skb: 2 callbacks suppressed
==> etcd [4ca9b8bb650e] <==
{"level":"info","ts":"2025-12-27T08:55:10.784336Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 2"}
{"level":"info","ts":"2025-12-27T08:55:10.784372Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"602226ed500416f5 has received 1 MsgVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-12-27T08:55:10.785974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"602226ed500416f5 became leader at term 2"}
{"level":"info","ts":"2025-12-27T08:55:10.786004Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 2"}
{"level":"info","ts":"2025-12-27T08:55:10.791476Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
{"level":"info","ts":"2025-12-27T08:55:10.793884Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:multinode-899276 ClientURLs:[https://192.168.39.24:2379]}","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
{"level":"info","ts":"2025-12-27T08:55:10.794043Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-12-27T08:55:10.793909Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-12-27T08:55:10.795404Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-12-27T08:55:10.799763Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-12-27T08:55:10.802567Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-12-27T08:55:10.802644Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-12-27T08:55:10.804819Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","cluster-version":"3.6"}
{"level":"info","ts":"2025-12-27T08:55:10.805072Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
{"level":"info","ts":"2025-12-27T08:55:10.805735Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
{"level":"info","ts":"2025-12-27T08:55:10.805926Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
{"level":"info","ts":"2025-12-27T08:55:10.807174Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
{"level":"info","ts":"2025-12-27T08:55:10.815395Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
{"level":"info","ts":"2025-12-27T08:55:10.816576Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2025-12-27T08:56:04.877177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.572626ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-27T08:56:04.877302Z","caller":"traceutil/trace.go:172","msg":"trace[1557608848] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:454; }","duration":"205.776267ms","start":"2025-12-27T08:56:04.671511Z","end":"2025-12-27T08:56:04.877287Z","steps":["trace[1557608848] 'range keys from in-memory index tree' (duration: 205.559438ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-27T08:56:04.877487Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.4767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-27T08:56:04.877538Z","caller":"traceutil/trace.go:172","msg":"trace[1875828016] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:454; }","duration":"245.507791ms","start":"2025-12-27T08:56:04.631992Z","end":"2025-12-27T08:56:04.877500Z","steps":["trace[1875828016] 'agreement among raft nodes before linearized reading' (duration: 92.674358ms)","trace[1875828016] 'range keys from in-memory index tree' (duration: 152.742931ms)"],"step_count":2}
{"level":"warn","ts":"2025-12-27T08:56:04.878377Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.056298ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654399270533750011 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-tz6w5\" mod_revision:454 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-tz6w5\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-tz6w5\" > >>","response":"size:16"}
{"level":"info","ts":"2025-12-27T08:56:04.878902Z","caller":"traceutil/trace.go:172","msg":"trace[1051096777] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"247.420304ms","start":"2025-12-27T08:56:04.631468Z","end":"2025-12-27T08:56:04.878888Z","steps":["trace[1051096777] 'process raft request' (duration: 93.279326ms)","trace[1051096777] 'compare' (duration: 152.870907ms)"],"step_count":2}
==> kernel <==
08:56:06 up 1 min, 0 users, load average: 0.95, 0.35, 0.12
Linux multinode-899276 6.6.95 #1 SMP PREEMPT_DYNAMIC Fri Dec 26 06:43:12 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kindnet [a7b61d118b3f] <==
I1227 08:55:25.911665 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1227 08:55:25.912075 1 main.go:139] hostIP = 192.168.39.24
podIP = 192.168.39.24
I1227 08:55:25.912269 1 main.go:148] setting mtu 1500 for CNI
I1227 08:55:25.912304 1 main.go:178] kindnetd IP family: "ipv4"
I1227 08:55:25.912324 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-12-27T08:55:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1227 08:55:26.215408 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1227 08:55:26.215439 1 controller.go:381] "Waiting for informer caches to sync"
I1227 08:55:26.215448 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1227 08:55:26.216460 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1227 08:55:26.606893 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1227 08:55:26.606942 1 metrics.go:72] Registering metrics
I1227 08:55:26.607009 1 controller.go:711] "Syncing nftables rules"
I1227 08:55:36.214731 1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
I1227 08:55:36.214869 1 main.go:301] handling current node
I1227 08:55:46.214591 1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
I1227 08:55:46.214648 1 main.go:301] handling current node
I1227 08:55:56.217888 1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
I1227 08:55:56.217992 1 main.go:301] handling current node
I1227 08:56:06.214496 1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
I1227 08:56:06.214540 1 main.go:301] handling current node
I1227 08:56:06.214556 1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
I1227 08:56:06.214568 1 main.go:324] Node multinode-899276-m02 has CIDR [10.244.1.0/24]
I1227 08:56:06.214996 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.160 Flags: [] Table: 0 Realm: 0}
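That last kindnet line is it reacting to the new node: it installs a route sending the worker's pod CIDR via the worker's node IP, which is what gives pods on different nodes L3 reachability. The manual equivalent of the route it added:

    # pod traffic for the m02 CIDR is forwarded to m02's node address
    sudo ip route replace 10.244.1.0/24 via 192.168.39.160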
==> kube-apiserver [14fb1b4cc933] <==
I1227 08:55:12.412187 1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
I1227 08:55:12.412274 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1227 08:55:12.415410 1 controller.go:667] quota admission added evaluator for: namespaces
I1227 08:55:12.422763 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1227 08:55:12.473480 1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
I1227 08:55:12.477513 1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
I1227 08:55:12.497235 1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1227 08:55:12.504256 1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
I1227 08:55:13.220614 1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
I1227 08:55:13.225535 1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
I1227 08:55:13.225752 1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
I1227 08:55:13.980887 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1227 08:55:14.037402 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1227 08:55:14.121453 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W1227 08:55:14.128526 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.24]
I1227 08:55:14.129442 1 controller.go:667] quota admission added evaluator for: endpoints
I1227 08:55:14.135088 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1227 08:55:14.269225 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1227 08:55:15.386610 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1227 08:55:15.428640 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I1227 08:55:15.441371 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1227 08:55:19.919728 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1227 08:55:20.223365 1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1227 08:55:20.228936 1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1227 08:55:20.270234 1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
==> kube-controller-manager [806a4f701d17] <==
I1227 08:55:19.089146 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.106770 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.123368 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.129444 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.129530 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.151864 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.155426 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.155501 1 range_allocator.go:177] "Sending events to api server"
I1227 08:55:19.155519 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.155544 1 range_allocator.go:181] "Starting range CIDR allocator"
I1227 08:55:19.155550 1 shared_informer.go:370] "Waiting for caches to sync"
I1227 08:55:19.155554 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.155636 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.167077 1 range_allocator.go:433] "Set node PodCIDR" node="multinode-899276" podCIDRs=["10.244.0.0/24"]
I1227 08:55:19.172639 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.176175 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.176447 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.179607 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.196290 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.208898 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:19.208913 1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
I1227 08:55:19.208917 1 garbagecollector.go:169] "Proceeding to collect garbage"
I1227 08:55:44.094465 1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
I1227 08:56:05.429119 1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899276-m02\" does not exist"
I1227 08:56:05.458174 1 range_allocator.go:433] "Set node PodCIDR" node="multinode-899276-m02" podCIDRs=["10.244.1.0/24"]
==> kube-proxy [d50ff81fb41a] <==
I1227 08:55:21.628068 1 shared_informer.go:370] "Waiting for caches to sync"
I1227 08:55:21.731947 1 shared_informer.go:377] "Caches are synced"
I1227 08:55:21.731996 1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
E1227 08:55:21.739671 1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1227 08:55:21.830226 1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1227 08:55:21.830342 1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1227 08:55:21.830404 1 server_linux.go:136] "Using iptables Proxier"
I1227 08:55:21.839592 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1227 08:55:21.840293 1 server.go:529] "Version info" version="v1.35.0"
I1227 08:55:21.840321 1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1227 08:55:21.842846 1 config.go:200] "Starting service config controller"
I1227 08:55:21.842864 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1227 08:55:21.842880 1 config.go:106] "Starting endpoint slice config controller"
I1227 08:55:21.842884 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1227 08:55:21.842909 1 config.go:403] "Starting serviceCIDR config controller"
I1227 08:55:21.842915 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1227 08:55:21.846740 1 config.go:309] "Starting node config controller"
I1227 08:55:21.846890 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1227 08:55:21.942963 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1227 08:55:21.943020 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1227 08:55:21.943138 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1227 08:55:21.948504 1 shared_informer.go:356] "Caches are synced" controller="node config"
==> kube-scheduler [8f2fcc85e5e1] <==
E1227 08:55:12.377527 1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
E1227 08:55:12.379893 1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
E1227 08:55:12.380089 1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
E1227 08:55:12.380428 1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
E1227 08:55:12.381099 1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
E1227 08:55:12.381174 1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
E1227 08:55:12.384043 1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
E1227 08:55:12.384255 1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
E1227 08:55:13.242305 1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
E1227 08:55:13.257422 1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
E1227 08:55:13.303156 1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
E1227 08:55:13.319157 1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
E1227 08:55:13.362023 1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
E1227 08:55:13.362795 1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
E1227 08:55:13.411755 1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
E1227 08:55:13.420451 1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
E1227 08:55:13.431365 1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
E1227 08:55:13.480845 1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
E1227 08:55:13.542450 1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
E1227 08:55:13.554908 1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
E1227 08:55:13.560944 1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
E1227 08:55:13.650997 1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
E1227 08:55:13.693380 1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
E1227 08:55:13.694477 1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
I1227 08:55:16.332120 1 shared_informer.go:377] "Caches are synced"
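[editor's note] The burst of "Failed to watch ... forbidden" errors above is a startup-ordering artifact, not a scheduler misconfiguration: on a fresh cluster the scheduler's informers begin listing resources before the API server has finished reconciling the bootstrap RBAC policy for system:kube-scheduler, and the "Caches are synced" line at 08:55:16 shows the informers recovered about three seconds later once the bindings existed. To check one of these permissions independently, a minimal client-go sketch can issue a SubjectAccessReview for the same user/verb/resource tuple as the first error (this helper is hypothetical and assumes a reachable kubeconfig at the default path; it is not part of the test suite):

// sarcheck.go - ask the API server whether system:kube-scheduler may list
// statefulsets.apps, mirroring the first reflector error above.
// Hypothetical standalone helper; assumes $HOME/.kube/config is valid.
package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Group:    "apps",
				Resource: "statefulsets",
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// On the healthy cluster this prints allowed=true once RBAC has settled.
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}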
==> kubelet <==
Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354785 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a93db4ef-7986-43f9-820c-2b117c90fd1a-lib-modules\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354867 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xjf\" (UniqueName: \"kubernetes.io/projected/7ca87068-e672-4641-bc6e-b04591e75a10-kube-api-access-m8xjf\") pod \"kindnet-mgnsl\" (UID: \"7ca87068-e672-4641-bc6e-b04591e75a10\") " pod="kube-system/kindnet-mgnsl"
Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354890 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a93db4ef-7986-43f9-820c-2b117c90fd1a-xtables-lock\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354942 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7ca87068-e672-4641-bc6e-b04591e75a10-cni-cfg\") pod \"kindnet-mgnsl\" (UID: \"7ca87068-e672-4641-bc6e-b04591e75a10\") " pod="kube-system/kindnet-mgnsl"
Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354968 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ca87068-e672-4641-bc6e-b04591e75a10-lib-modules\") pod \"kindnet-mgnsl\" (UID: \"7ca87068-e672-4641-bc6e-b04591e75a10\") " pod="kube-system/kindnet-mgnsl"
Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.355059 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a93db4ef-7986-43f9-820c-2b117c90fd1a-kube-proxy\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.355121 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwv8\" (UniqueName: \"kubernetes.io/projected/a93db4ef-7986-43f9-820c-2b117c90fd1a-kube-api-access-wnwv8\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
Dec 27 08:55:22 multinode-899276 kubelet[2549]: E1227 08:55:22.165069 2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-899276" containerName="etcd"
Dec 27 08:55:22 multinode-899276 kubelet[2549]: I1227 08:55:22.182334 2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-rrb2x" podStartSLOduration=2.182320518 podStartE2EDuration="2.182320518s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 08:55:21.591305433 +0000 UTC m=+6.370036755" watchObservedRunningTime="2025-12-27 08:55:22.182320518 +0000 UTC m=+6.961051864"
Dec 27 08:55:23 multinode-899276 kubelet[2549]: E1227 08:55:23.868801 2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-899276" containerName="kube-apiserver"
Dec 27 08:55:24 multinode-899276 kubelet[2549]: E1227 08:55:24.280199 2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-899276" containerName="kube-scheduler"
Dec 27 08:55:26 multinode-899276 kubelet[2549]: I1227 08:55:26.685630 2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-mgnsl" podStartSLOduration=2.79150236 podStartE2EDuration="6.685618144s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="2025-12-27 08:55:21.301251881 +0000 UTC m=+6.079983198" lastFinishedPulling="2025-12-27 08:55:25.195367666 +0000 UTC m=+9.974098982" observedRunningTime="2025-12-27 08:55:26.683876008 +0000 UTC m=+11.462607343" watchObservedRunningTime="2025-12-27 08:55:26.685618144 +0000 UTC m=+11.464349467"
Dec 27 08:55:28 multinode-899276 kubelet[2549]: E1227 08:55:28.767005 2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-multinode-899276" containerName="kube-controller-manager"
Dec 27 08:55:32 multinode-899276 kubelet[2549]: E1227 08:55:32.167933 2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-899276" containerName="etcd"
Dec 27 08:55:33 multinode-899276 kubelet[2549]: E1227 08:55:33.875439 2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-899276" containerName="kube-apiserver"
Dec 27 08:55:34 multinode-899276 kubelet[2549]: E1227 08:55:34.286744 2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-899276" containerName="kube-scheduler"
Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.671822 2549 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789814 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2dd7f649-dfe6-4a2d-b321-673b664a5d1b-tmp\") pod \"storage-provisioner\" (UID: \"2dd7f649-dfe6-4a2d-b321-673b664a5d1b\") " pod="kube-system/storage-provisioner"
Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789865 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9a3c2-20bf-4e86-8443-702c47b3e04b-config-volume\") pod \"coredns-7d764666f9-952ns\" (UID: \"f0e9a3c2-20bf-4e86-8443-702c47b3e04b\") " pod="kube-system/coredns-7d764666f9-952ns"
Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789892 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7pql\" (UniqueName: \"kubernetes.io/projected/f0e9a3c2-20bf-4e86-8443-702c47b3e04b-kube-api-access-l7pql\") pod \"coredns-7d764666f9-952ns\" (UID: \"f0e9a3c2-20bf-4e86-8443-702c47b3e04b\") " pod="kube-system/coredns-7d764666f9-952ns"
Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789911 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcsm\" (UniqueName: \"kubernetes.io/projected/2dd7f649-dfe6-4a2d-b321-673b664a5d1b-kube-api-access-pxcsm\") pod \"storage-provisioner\" (UID: \"2dd7f649-dfe6-4a2d-b321-673b664a5d1b\") " pod="kube-system/storage-provisioner"
Dec 27 08:55:41 multinode-899276 kubelet[2549]: E1227 08:55:41.773849 2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
Dec 27 08:55:41 multinode-899276 kubelet[2549]: I1227 08:55:41.819800 2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-952ns" podStartSLOduration=21.81978365 podStartE2EDuration="21.81978365s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 08:55:41.799893618 +0000 UTC m=+26.578624941" watchObservedRunningTime="2025-12-27 08:55:41.81978365 +0000 UTC m=+26.598514973"
Dec 27 08:55:42 multinode-899276 kubelet[2549]: E1227 08:55:42.792462 2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
Dec 27 08:55:43 multinode-899276 kubelet[2549]: E1227 08:55:43.808397 2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
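[editor's note] The pod_startup_latency_tracker lines above are plain timestamp arithmetic: the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts time spent pulling images. That is why coredns-7d764666f9-952ns (no pull) reports 21.8s for both figures, while kindnet-mgnsl reports 6.69s end-to-end but only 2.79s SLO after excluding its 3.89s image pull. The sketch below recomputes both figures from the timestamps in the log (the timestamps are the log's own; the helper itself is hypothetical):

// sloduration.go - recompute the kubelet's startup figures for
// coredns-7d764666f9-952ns and kindnet-mgnsl from the log's timestamps.
// Hypothetical helper, for illustration only.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-12-27 08:55:20 +0000 UTC")

	// coredns: no image pull, so SLO duration == end-to-end duration.
	running := ts("2025-12-27 08:55:41.81978365 +0000 UTC")
	fmt.Println(running.Sub(created)) // 21.81978365s, as logged

	// kindnet: the SLO figure excludes the time spent pulling the image.
	e2e := ts("2025-12-27 08:55:26.685618144 +0000 UTC").Sub(created)
	pull := ts("2025-12-27 08:55:25.195367666 +0000 UTC").
		Sub(ts("2025-12-27 08:55:21.301251881 +0000 UTC"))
	fmt.Println(e2e - pull) // 2.791502359s (logged as 2.79150236)
}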
==> storage-provisioner [12a2f3326d0f] <==
I1227 08:55:41.766523 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-899276_a520e26b-0b55-4f68-b7fe-7e70bd195afc!
W1227 08:55:43.681843 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:43.691726 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:45.695561 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:45.705037 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:47.709090 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:47.714272 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:49.718390 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:49.724111 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:51.732382 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:51.747336 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:53.753592 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:53.760593 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:55.768238 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:55.781660 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:57.787056 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:57.793152 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:59.799208 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:55:59.806534 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:56:01.811591 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:56:01.822788 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:56:03.826215 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:56:03.831532 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:56:05.835877 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1227 08:56:05.840742 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
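[editor's note] The paired deprecation warnings arriving every ~2s in the storage-provisioner log are most plausibly its leader-election renew loop, which still reads and updates a v1 Endpoints object as its lock; the API server attaches a deprecation warning to each request and client-go's default handler prints them (hence warnings.go:70). The durable fix on the provisioner side is to move the lock to a coordination.k8s.io Lease. A minimal client-go sketch follows; the lock name, namespace, and identity are illustrative, not the provisioner's actual code:

// leaselock.go - leader election backed by a coordination.k8s.io Lease
// instead of the deprecated v1 Endpoints lock. Illustrative sketch only;
// lock name, namespace, and identity below are hypothetical.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-example"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}

(client-go can also silence the noise per-client by setting cfg.WarningHandler = rest.NoWarnings{}, but that only hides the symptom rather than retiring the Endpoints lock.)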
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-899276 -n multinode-899276
helpers_test.go:270: (dbg) Run: kubectl --context multinode-899276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: kindnet-4pk8r kube-proxy-xhrn8
helpers_test.go:283: ======> post-mortem[TestMultiNode/serial/FreshStart2Nodes]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context multinode-899276 describe pod kindnet-4pk8r kube-proxy-xhrn8
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context multinode-899276 describe pod kindnet-4pk8r kube-proxy-xhrn8: exit status 1 (74.410128ms)
** stderr **
Error from server (NotFound): pods "kindnet-4pk8r" not found
Error from server (NotFound): pods "kube-proxy-xhrn8" not found
** /stderr **
helpers_test.go:288: kubectl --context multinode-899276 describe pod kindnet-4pk8r kube-proxy-xhrn8: exit status 1
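[editor's note] The NotFound errors in the post-mortem are a benign race rather than a second failure: between the field-selector list at helpers_test.go:270 and the describe at :286, the pending kindnet/kube-proxy pods for the second node (which never joined, given the exit status 80) were most plausibly deleted and replaced by their controllers, so the recorded names no longer existed. A hypothetical client-go reproduction of the helper's query, for reference (not the test's own code):

// nonrunning.go - list pods across all namespaces whose phase is not
// Running, mirroring the helpers_test.go field-selector query above.
// Hypothetical reproduction for illustration.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// Names such as kindnet-4pk8r can vanish before any follow-up
		// describe call; controllers replace pending pods freely.
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}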
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (90.02s)