Test Report: KVM_Linux 22344

edd64449414ff518763defe8c5f2fdfa65b6a5d9:2025-12-27:43007

Failed tests (2/370)

Order  Failed test                             Duration (s)
234    TestMultiNode/serial/FreshStart2Nodes   90.02
238    TestMultiNode/serial/MultiNodeLabels    1.58
TestMultiNode/serial/FreshStart2Nodes (90.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E1227 08:55:12.704577    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : exit status 80 (1m28.080234182s)

-- stdout --
	* [multinode-899276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "multinode-899276" primary control-plane node in "multinode-899276" cluster
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "multinode-899276-m02" worker node in "multinode-899276" cluster
	* Found network options:
	  - NO_PROXY=192.168.39.24
	  - env NO_PROXY=192.168.39.24
	
	

-- /stdout --
** stderr ** 
	I1227 08:54:37.348894   24108 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:54:37.349196   24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:54:37.349207   24108 out.go:374] Setting ErrFile to fd 2...
	I1227 08:54:37.349214   24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:54:37.349401   24108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:54:37.349901   24108 out.go:368] Setting JSON to false
	I1227 08:54:37.350702   24108 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2227,"bootTime":1766823450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 08:54:37.350761   24108 start.go:143] virtualization: kvm guest
	I1227 08:54:37.352914   24108 out.go:179] * [multinode-899276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 08:54:37.354122   24108 notify.go:221] Checking for updates...
	I1227 08:54:37.354140   24108 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 08:54:37.355599   24108 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:54:37.356985   24108 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:54:37.358228   24108 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.359373   24108 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 08:54:37.360648   24108 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 08:54:37.362069   24108 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:54:37.398292   24108 out.go:179] * Using the kvm2 driver based on user configuration
	I1227 08:54:37.399595   24108 start.go:309] selected driver: kvm2
	I1227 08:54:37.399614   24108 start.go:928] validating driver "kvm2" against <nil>
	I1227 08:54:37.399634   24108 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 08:54:37.400332   24108 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 08:54:37.400590   24108 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 08:54:37.400626   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:54:37.400682   24108 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1227 08:54:37.400692   24108 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 08:54:37.400744   24108 start.go:353] cluster config:
	{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:54:37.400897   24108 iso.go:125] acquiring lock: {Name:mkf3af0a60e6ccee2eeb813de50903ed5d7e8922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 08:54:37.402631   24108 out.go:179] * Starting "multinode-899276" primary control-plane node in "multinode-899276" cluster
	I1227 08:54:37.403816   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:54:37.403844   24108 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 08:54:37.403854   24108 cache.go:65] Caching tarball of preloaded images
	I1227 08:54:37.403951   24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 08:54:37.403967   24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 08:54:37.404346   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:54:37.404374   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json: {Name:mk5e07ed738ae868a23976588c175a8cb2b30a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:54:37.404563   24108 start.go:360] acquireMachinesLock for multinode-899276: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 08:54:37.404598   24108 start.go:364] duration metric: took 20.431µs to acquireMachinesLock for "multinode-899276"
	I1227 08:54:37.404622   24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 08:54:37.404675   24108 start.go:125] createHost starting for "" (driver="kvm2")
	I1227 08:54:37.407102   24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1227 08:54:37.407274   24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
	I1227 08:54:37.407306   24108 client.go:173] LocalClient.Create starting
	I1227 08:54:37.407365   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
	I1227 08:54:37.407409   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:54:37.407425   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:54:37.407478   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
	I1227 08:54:37.407496   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:54:37.407507   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:54:37.407806   24108 main.go:144] libmachine: creating domain...
	I1227 08:54:37.407817   24108 main.go:144] libmachine: creating network...
	I1227 08:54:37.409512   24108 main.go:144] libmachine: found existing default network
	I1227 08:54:37.409702   24108 main.go:144] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.410292   24108 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001caea70}
	I1227 08:54:37.410380   24108 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-multinode-899276</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.416200   24108 main.go:144] libmachine: creating private network mk-multinode-899276 192.168.39.0/24...
	I1227 08:54:37.484690   24108 main.go:144] libmachine: private network mk-multinode-899276 192.168.39.0/24 created
	I1227 08:54:37.484994   24108 main.go:144] libmachine: <network>
	  <name>mk-multinode-899276</name>
	  <uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:7e:96:0f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.485088   24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
	I1227 08:54:37.485112   24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
	I1227 08:54:37.485123   24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.485174   24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
	I1227 08:54:37.708878   24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa...
	I1227 08:54:37.789981   24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk...
	I1227 08:54:37.790024   24108 main.go:144] libmachine: Writing magic tar header
	I1227 08:54:37.790040   24108 main.go:144] libmachine: Writing SSH key tar header
	I1227 08:54:37.790127   24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
	I1227 08:54:37.790183   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276
	I1227 08:54:37.790204   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 (perms=drwx------)
	I1227 08:54:37.790215   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
	I1227 08:54:37.790225   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
	I1227 08:54:37.790238   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.790249   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
	I1227 08:54:37.790257   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
	I1227 08:54:37.790265   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
	I1227 08:54:37.790275   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1227 08:54:37.790287   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1227 08:54:37.790303   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1227 08:54:37.790313   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1227 08:54:37.790321   24108 main.go:144] libmachine: checking permissions on dir: /home
	I1227 08:54:37.790330   24108 main.go:144] libmachine: skipping /home - not owner
	I1227 08:54:37.790334   24108 main.go:144] libmachine: defining domain...
	I1227 08:54:37.792061   24108 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>multinode-899276</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:54:37.797217   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:e2:49:84 in network default
	I1227 08:54:37.797913   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:37.797931   24108 main.go:144] libmachine: starting domain...
	I1227 08:54:37.797936   24108 main.go:144] libmachine: ensuring networks are active...
	I1227 08:54:37.798746   24108 main.go:144] libmachine: Ensuring network default is active
	I1227 08:54:37.799132   24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
	I1227 08:54:37.799776   24108 main.go:144] libmachine: getting domain XML...
	I1227 08:54:37.800794   24108 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-899276</name>
	  <uuid>6d370929-9382-4953-8ba6-4fb6eca3e648</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4c:5c:b4'/>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e2:49:84'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:54:39.079279   24108 main.go:144] libmachine: waiting for domain to start...
	I1227 08:54:39.080610   24108 main.go:144] libmachine: domain is now running
	I1227 08:54:39.080624   24108 main.go:144] libmachine: waiting for IP...
	I1227 08:54:39.081451   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.082023   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.082037   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.082336   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.082377   24108 retry.go:84] will retry after 200ms: waiting for domain to come up
	I1227 08:54:39.326020   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.326723   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.326741   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.327098   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.575768   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.576511   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.576534   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.576883   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.876331   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.877091   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.877107   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.877413   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:40.370368   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:40.371069   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:40.371086   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:40.371431   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:40.865483   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:40.866211   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:40.866236   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:40.866603   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:41.484623   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:41.485260   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:41.485279   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:41.485638   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:42.393849   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:42.394445   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:42.394463   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:42.394914   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:43.319225   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:43.320003   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:43.320020   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:43.320334   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:44.724122   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:44.724874   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:44.724891   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:44.725237   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:46.322345   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:46.323107   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:46.323130   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:46.323457   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:48.157422   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:48.158091   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:48.158110   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:48.158455   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:51.501875   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:51.502515   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:51.502530   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:51.502791   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:51.502830   24108 retry.go:84] will retry after 4.3s: waiting for domain to come up
	I1227 08:54:55.837835   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:55.838577   24108 main.go:144] libmachine: domain multinode-899276 has current primary IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:55.838596   24108 main.go:144] libmachine: found domain IP: 192.168.39.24
	I1227 08:54:55.838605   24108 main.go:144] libmachine: reserving static IP address...
	I1227 08:54:55.839242   24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276", mac: "52:54:00:4c:5c:b4", ip: "192.168.39.24"} in network mk-multinode-899276
	I1227 08:54:56.025597   24108 main.go:144] libmachine: reserved static IP address 192.168.39.24 for domain multinode-899276
	I1227 08:54:56.025623   24108 main.go:144] libmachine: waiting for SSH...
	I1227 08:54:56.025631   24108 main.go:144] libmachine: Getting to WaitForSSH function...
	I1227 08:54:56.028518   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.029028   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.029077   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.029273   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.029482   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.029494   24108 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1227 08:54:56.143804   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:54:56.144248   24108 main.go:144] libmachine: domain creation complete
	I1227 08:54:56.146013   24108 machine.go:94] provisionDockerMachine start ...
	I1227 08:54:56.148712   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.149157   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.149206   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.149383   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.149565   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.149574   24108 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 08:54:56.263810   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1227 08:54:56.263841   24108 buildroot.go:166] provisioning hostname "multinode-899276"
	I1227 08:54:56.266910   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.267410   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.267435   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.267640   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.267847   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.267858   24108 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-899276 && echo "multinode-899276" | sudo tee /etc/hostname
	I1227 08:54:56.401325   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276
	
	I1227 08:54:56.404664   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.405235   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.405263   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.405433   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.405644   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.405659   24108 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 08:54:56.543193   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:54:56.543230   24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
	I1227 08:54:56.543264   24108 buildroot.go:174] setting up certificates
	I1227 08:54:56.543282   24108 provision.go:84] configureAuth start
	I1227 08:54:56.546171   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.546588   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.546612   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.548760   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.549114   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.549136   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.549243   24108 provision.go:143] copyHostCerts
	I1227 08:54:56.549266   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:54:56.549290   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
	I1227 08:54:56.549298   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:54:56.549370   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
	I1227 08:54:56.549490   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:54:56.549516   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
	I1227 08:54:56.549522   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:54:56.549548   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
	I1227 08:54:56.549593   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:54:56.549609   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
	I1227 08:54:56.549615   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:54:56.549634   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
	I1227 08:54:56.549680   24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276 san=[127.0.0.1 192.168.39.24 localhost minikube multinode-899276]
	I1227 08:54:56.564952   24108 provision.go:177] copyRemoteCerts
	I1227 08:54:56.565003   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 08:54:56.567240   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.567643   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.567677   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.567850   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:56.656198   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 08:54:56.656292   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 08:54:56.685216   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 08:54:56.685304   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1227 08:54:56.714733   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 08:54:56.714819   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 08:54:56.743305   24108 provision.go:87] duration metric: took 199.989326ms to configureAuth
	I1227 08:54:56.743338   24108 buildroot.go:189] setting minikube options for container-runtime
	I1227 08:54:56.743528   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:54:56.746235   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.746587   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.746606   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.746782   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.747027   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.747039   24108 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 08:54:56.861225   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 08:54:56.861255   24108 buildroot.go:70] root file system type: tmpfs
	I1227 08:54:56.861417   24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 08:54:56.864305   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.864731   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.864767   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.864925   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.865130   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.865170   24108 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 08:54:56.996399   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 08:54:56.999444   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.999882   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.999912   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.000156   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:57.000379   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:57.000396   24108 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 08:54:57.924795   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1227 08:54:57.924823   24108 machine.go:97] duration metric: took 1.778786884s to provisionDockerMachine
	I1227 08:54:57.924839   24108 client.go:176] duration metric: took 20.517522558s to LocalClient.Create
	I1227 08:54:57.924853   24108 start.go:167] duration metric: took 20.517578026s to libmachine.API.Create "multinode-899276"
	I1227 08:54:57.924862   24108 start.go:293] postStartSetup for "multinode-899276" (driver="kvm2")
	I1227 08:54:57.924874   24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 08:54:57.924962   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 08:54:57.927733   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.928188   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:57.928219   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.928364   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.017094   24108 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 08:54:58.021892   24108 info.go:137] Remote host: Buildroot 2025.02
	I1227 08:54:58.021927   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
	I1227 08:54:58.022001   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
	I1227 08:54:58.022108   24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
	I1227 08:54:58.022115   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
	I1227 08:54:58.022194   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 08:54:58.035018   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:54:58.064746   24108 start.go:296] duration metric: took 139.872084ms for postStartSetup
	I1227 08:54:58.067860   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.068279   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.068306   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.068579   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:54:58.068756   24108 start.go:128] duration metric: took 20.664071028s to createHost
	I1227 08:54:58.071566   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.072015   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.072040   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.072244   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:58.072473   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:58.072488   24108 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1227 08:54:58.187322   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825698.156416973
	
	I1227 08:54:58.187344   24108 fix.go:216] guest clock: 1766825698.156416973
	I1227 08:54:58.187351   24108 fix.go:229] Guest: 2025-12-27 08:54:58.156416973 +0000 UTC Remote: 2025-12-27 08:54:58.068766977 +0000 UTC m=+20.766440443 (delta=87.649996ms)
	I1227 08:54:58.187367   24108 fix.go:200] guest clock delta is within tolerance: 87.649996ms
	I1227 08:54:58.187371   24108 start.go:83] releasing machines lock for "multinode-899276", held for 20.782762567s
	I1227 08:54:58.189878   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.190311   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.190336   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.190848   24108 ssh_runner.go:195] Run: cat /version.json
	I1227 08:54:58.190934   24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 08:54:58.193909   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.193920   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194367   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.194393   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194412   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.194445   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194571   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.194749   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.303202   24108 ssh_runner.go:195] Run: systemctl --version
	I1227 08:54:58.309380   24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 08:54:58.315530   24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 08:54:58.315591   24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 08:54:58.335551   24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 08:54:58.335587   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:54:58.335615   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:54:58.335736   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:54:58.357443   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 08:54:58.369407   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 08:54:58.384702   24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 08:54:58.384807   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 08:54:58.399640   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:54:58.412464   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 08:54:58.424691   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:54:58.437707   24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 08:54:58.450402   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 08:54:58.462916   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 08:54:58.475650   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 08:54:58.493530   24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 08:54:58.504139   24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1227 08:54:58.504192   24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1227 08:54:58.516423   24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 08:54:58.528272   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:54:58.673716   24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 08:54:58.720867   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:54:58.720909   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:54:58.720972   24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 08:54:58.744526   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:54:58.764985   24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 08:54:58.785879   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:54:58.803205   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:54:58.821885   24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 08:54:58.856773   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:54:58.873676   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:54:58.896773   24108 ssh_runner.go:195] Run: which cri-dockerd
	I1227 08:54:58.901095   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 08:54:58.912977   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 08:54:58.935679   24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 08:54:59.087073   24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 08:54:59.235233   24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 08:54:59.235368   24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 08:54:59.257291   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:54:59.273342   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:54:59.413736   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:54:59.868087   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 08:54:59.883321   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 08:54:59.898581   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:54:59.913286   24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 08:55:00.062974   24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 08:55:00.214186   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:00.363957   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 08:55:00.400471   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 08:55:00.416741   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:00.560590   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 08:55:00.668182   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:55:00.687244   24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 08:55:00.687326   24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 08:55:00.693883   24108 start.go:574] Will wait 60s for crictl version
	I1227 08:55:00.693968   24108 ssh_runner.go:195] Run: which crictl
	I1227 08:55:00.698083   24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 08:55:00.732884   24108 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1227 08:55:00.732961   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:55:00.764467   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:55:00.793639   24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1227 08:55:00.796490   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:00.796890   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:00.796916   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:00.797129   24108 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1227 08:55:00.801979   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:55:00.819694   24108 kubeadm.go:884] updating cluster {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 08:55:00.819800   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:55:00.819853   24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 08:55:00.841928   24108 docker.go:694] Got preloaded images: 
	I1227 08:55:00.841951   24108 docker.go:700] registry.k8s.io/kube-apiserver:v1.35.0 wasn't preloaded
	I1227 08:55:00.841997   24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1227 08:55:00.855548   24108 ssh_runner.go:195] Run: which lz4
	I1227 08:55:00.860486   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1227 08:55:00.860594   24108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1227 08:55:00.865387   24108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1227 08:55:00.865417   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284632523 bytes)
	I1227 08:55:01.961740   24108 docker.go:658] duration metric: took 1.101175277s to copy over tarball
	I1227 08:55:01.961831   24108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1227 08:55:03.184079   24108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.222186343s)
	I1227 08:55:03.184117   24108 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1227 08:55:03.216811   24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1227 08:55:03.229331   24108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1227 08:55:03.250420   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:55:03.266159   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:03.414345   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:55:05.441484   24108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.027089175s)
	I1227 08:55:05.441602   24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 08:55:05.460483   24108 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 08:55:05.460508   24108 cache_images.go:86] Images are preloaded, skipping loading
	I1227 08:55:05.460517   24108 kubeadm.go:935] updating node { 192.168.39.24 8443 v1.35.0 docker true true} ...
	I1227 08:55:05.460610   24108 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 08:55:05.460667   24108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 08:55:05.512991   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:55:05.513022   24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1227 08:55:05.513043   24108 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 08:55:05.513080   24108 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899276 NodeName:multinode-899276 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 08:55:05.513228   24108 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 08:55:05.513292   24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 08:55:05.525546   24108 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 08:55:05.525616   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 08:55:05.537237   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1227 08:55:05.557993   24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 08:55:05.579343   24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1227 08:55:05.600550   24108 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1227 08:55:05.605151   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:55:05.620984   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:05.769960   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:55:05.800659   24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.24
	I1227 08:55:05.800681   24108 certs.go:195] generating shared ca certs ...
	I1227 08:55:05.800706   24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.800879   24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
	I1227 08:55:05.800934   24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
	I1227 08:55:05.800949   24108 certs.go:257] generating profile certs ...
	I1227 08:55:05.801012   24108 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key
	I1227 08:55:05.801071   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt with IP's: []
	I1227 08:55:05.940834   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt ...
	I1227 08:55:05.940874   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt: {Name:mk02178aca7f56d432d5f5e37ab494f5434cad17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.941124   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key ...
	I1227 08:55:05.941147   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key: {Name:mk6471e99270ac274eb8d161834a8e74a99ce837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.941271   24108 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d
	I1227 08:55:05.941294   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
	I1227 08:55:05.986153   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d ...
	I1227 08:55:05.986188   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d: {Name:mk802401bb34f0577b94f18188268edd10cab228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.986405   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d ...
	I1227 08:55:05.986426   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d: {Name:mk499be31979f3e860f435493b7a49f6c8a77f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.986541   24108 certs.go:382] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt
	I1227 08:55:05.986669   24108 certs.go:386] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key
	I1227 08:55:05.986770   24108 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key
	I1227 08:55:05.986801   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt with IP's: []
	I1227 08:55:06.117402   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt ...
	I1227 08:55:06.117436   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt: {Name:mkff498d36179d0686c029b1a0d2c2aa28970730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:06.117638   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key ...
	I1227 08:55:06.117659   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key: {Name:mkae01040e0a5553a361620eb1dc3658cbd20bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:06.117774   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 08:55:06.117805   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 08:55:06.117825   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 08:55:06.117845   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 08:55:06.117861   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 08:55:06.117875   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 08:55:06.117888   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 08:55:06.117906   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 08:55:06.117969   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
	W1227 08:55:06.118021   24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
	I1227 08:55:06.118034   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 08:55:06.118087   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
	I1227 08:55:06.118141   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
	I1227 08:55:06.118179   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
	I1227 08:55:06.118236   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:55:06.118294   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.118318   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.118337   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.118857   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 08:55:06.150178   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 08:55:06.179223   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 08:55:06.208476   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 08:55:06.239094   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 08:55:06.268368   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 08:55:06.297730   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 08:55:06.326802   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 08:55:06.357205   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 08:55:06.387582   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
	I1227 08:55:06.417521   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
	I1227 08:55:06.449486   24108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 08:55:06.473842   24108 ssh_runner.go:195] Run: openssl version
	I1227 08:55:06.481673   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.494727   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
	I1227 08:55:06.506605   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.511904   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.511979   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.522748   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 08:55:06.535114   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
	I1227 08:55:06.546799   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.558007   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 08:55:06.569782   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.575189   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.575271   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.582359   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 08:55:06.594977   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 08:55:06.606187   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.617464   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
	I1227 08:55:06.628478   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.633627   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.633684   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.640779   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 08:55:06.652579   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
	I1227 08:55:06.663960   24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 08:55:06.668886   24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 08:55:06.668953   24108 kubeadm.go:401] StartCluster: {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:55:06.669105   24108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 08:55:06.684838   24108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 08:55:06.696256   24108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 08:55:06.708324   24108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 08:55:06.720681   24108 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 08:55:06.720728   24108 kubeadm.go:158] found existing configuration files:
	
	I1227 08:55:06.720787   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 08:55:06.731330   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 08:55:06.731392   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 08:55:06.744324   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 08:55:06.754995   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 08:55:06.755091   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 08:55:06.767513   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 08:55:06.778490   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 08:55:06.778576   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 08:55:06.789929   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 08:55:06.800709   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 08:55:06.800794   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 08:55:06.812666   24108 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1227 08:55:07.024456   24108 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 08:55:15.975818   24108 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 08:55:15.975905   24108 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 08:55:15.976023   24108 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 08:55:15.976153   24108 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 08:55:15.976280   24108 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 08:55:15.976375   24108 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 08:55:15.977966   24108 out.go:252]   - Generating certificates and keys ...
	I1227 08:55:15.978092   24108 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 08:55:15.978154   24108 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 08:55:15.978227   24108 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 08:55:15.978279   24108 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 08:55:15.978354   24108 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 08:55:15.978437   24108 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 08:55:15.978507   24108 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 08:55:15.978652   24108 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1227 08:55:15.978708   24108 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 08:55:15.978817   24108 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1227 08:55:15.978879   24108 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 08:55:15.978934   24108 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 08:55:15.979025   24108 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 08:55:15.979124   24108 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 08:55:15.979189   24108 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 08:55:15.979284   24108 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 08:55:15.979376   24108 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 08:55:15.979463   24108 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 08:55:15.979528   24108 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 08:55:15.979667   24108 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 08:55:15.979731   24108 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 08:55:15.981818   24108 out.go:252]   - Booting up control plane ...
	I1227 08:55:15.981903   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 08:55:15.981981   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 08:55:15.982067   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 08:55:15.982163   24108 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 08:55:15.982243   24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 08:55:15.982343   24108 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 08:55:15.982416   24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 08:55:15.982468   24108 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 08:55:15.982635   24108 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 08:55:15.982810   24108 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 08:55:15.982906   24108 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001479517s
	I1227 08:55:15.983060   24108 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 08:55:15.983187   24108 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
	I1227 08:55:15.983294   24108 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 08:55:15.983366   24108 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 08:55:15.983434   24108 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508222077s
	I1227 08:55:15.983490   24108 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.795811505s
	I1227 08:55:15.983547   24108 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00280761s
	I1227 08:55:15.983634   24108 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 08:55:15.983743   24108 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 08:55:15.983806   24108 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 08:55:15.983962   24108 kubeadm.go:319] [mark-control-plane] Marking the node multinode-899276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 08:55:15.984029   24108 kubeadm.go:319] [bootstrap-token] Using token: 8gubmu.jzeht1x7riked3vp
	I1227 08:55:15.985339   24108 out.go:252]   - Configuring RBAC rules ...
	I1227 08:55:15.985468   24108 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 08:55:15.985590   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 08:55:15.985836   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 08:55:15.985963   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 08:55:15.986071   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 08:55:15.986140   24108 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 08:55:15.986233   24108 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 08:55:15.986269   24108 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 08:55:15.986315   24108 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 08:55:15.986323   24108 kubeadm.go:319] 
	I1227 08:55:15.986381   24108 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 08:55:15.986390   24108 kubeadm.go:319] 
	I1227 08:55:15.986465   24108 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 08:55:15.986474   24108 kubeadm.go:319] 
	I1227 08:55:15.986507   24108 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 08:55:15.986576   24108 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 08:55:15.986650   24108 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 08:55:15.986662   24108 kubeadm.go:319] 
	I1227 08:55:15.986752   24108 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 08:55:15.986762   24108 kubeadm.go:319] 
	I1227 08:55:15.986803   24108 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 08:55:15.986808   24108 kubeadm.go:319] 
	I1227 08:55:15.986860   24108 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 08:55:15.986924   24108 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 08:55:15.986987   24108 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 08:55:15.986995   24108 kubeadm.go:319] 
	I1227 08:55:15.987083   24108 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 08:55:15.987152   24108 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 08:55:15.987157   24108 kubeadm.go:319] 
	I1227 08:55:15.987230   24108 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
	I1227 08:55:15.987318   24108 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c \
	I1227 08:55:15.987337   24108 kubeadm.go:319] 	--control-plane 
	I1227 08:55:15.987343   24108 kubeadm.go:319] 
	I1227 08:55:15.987420   24108 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 08:55:15.987428   24108 kubeadm.go:319] 
	I1227 08:55:15.987499   24108 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
	I1227 08:55:15.987622   24108 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c 
	I1227 08:55:15.987640   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:55:15.987649   24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1227 08:55:15.989869   24108 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 08:55:15.990980   24108 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 08:55:15.997094   24108 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 08:55:15.997119   24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 08:55:16.018807   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 08:55:16.327079   24108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 08:55:16.327141   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:16.327146   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276 minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=true
	I1227 08:55:16.365159   24108 ops.go:34] apiserver oom_adj: -16
	I1227 08:55:16.465863   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:16.966866   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:17.466570   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:17.966578   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:18.466519   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:18.966943   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:19.466148   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:19.966252   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:20.466874   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:20.559551   24108 kubeadm.go:1114] duration metric: took 4.232470194s to wait for elevateKubeSystemPrivileges
	I1227 08:55:20.559594   24108 kubeadm.go:403] duration metric: took 13.890642839s to StartCluster
	I1227 08:55:20.559615   24108 settings.go:142] acquiring lock: {Name:mk44fcba3019847ba7794682dc7fa5d4c6839e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:20.559700   24108 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:55:20.560349   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/kubeconfig: {Name:mk9f130990d4b2bd0dfe5788b549d55d90047007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:20.560606   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 08:55:20.560624   24108 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 08:55:20.560698   24108 addons.go:70] Setting storage-provisioner=true in profile "multinode-899276"
	I1227 08:55:20.560599   24108 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 08:55:20.560734   24108 addons.go:70] Setting default-storageclass=true in profile "multinode-899276"
	I1227 08:55:20.560754   24108 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "multinode-899276"
	I1227 08:55:20.560889   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:20.560722   24108 addons.go:239] Setting addon storage-provisioner=true in "multinode-899276"
	I1227 08:55:20.560976   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:55:20.563353   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:20.563858   24108 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 08:55:20.563881   24108 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 08:55:20.563887   24108 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 08:55:20.563895   24108 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 08:55:20.563910   24108 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 08:55:20.563922   24108 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 08:55:20.563927   24108 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 08:55:20.564267   24108 addons.go:239] Setting addon default-storageclass=true in "multinode-899276"
	I1227 08:55:20.564296   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:55:20.566001   24108 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 08:55:20.566022   24108 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 08:55:20.566660   24108 out.go:179] * Verifying Kubernetes components...
	I1227 08:55:20.566668   24108 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 08:55:20.568005   24108 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 08:55:20.568024   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:20.568027   24108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 08:55:20.568764   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.569218   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:20.569253   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.569506   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:55:20.570678   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.571119   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:20.571146   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.571271   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:55:20.721800   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
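	The pipeline above patches the Corefile stored in the coredns ConfigMap so that "host.minikube.internal" resolves to the host-side gateway. A minimal sketch of the fragment those two sed expressions produce, assuming an otherwise stock Corefile (everything else is left untouched and elided here):
	    .:53 {
	        log                      # inserted before the existing "errors" line
	        errors
	        ...
	        hosts {                  # inserted before the "forward . /etc/resolv.conf" line
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf ...
	        ...
	    }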
	I1227 08:55:20.853268   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:55:21.022237   24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 08:55:21.022257   24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 08:55:21.456081   24108 start.go:987] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1227 08:55:21.456682   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:21.456749   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:21.457033   24108 node_ready.go:35] waiting up to 6m0s for node "multinode-899276" to be "Ready" ...
	I1227 08:55:21.828507   24108 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 08:55:21.829821   24108 addons.go:530] duration metric: took 1.269198648s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 08:55:21.962140   24108 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-899276" context rescaled to 1 replicas
	W1227 08:55:23.460520   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:25.461678   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:27.960886   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:30.459943   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:32.460468   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:34.460900   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:36.960939   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:39.460258   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	I1227 08:55:40.960160   24108 node_ready.go:49] node "multinode-899276" is "Ready"
	I1227 08:55:40.960196   24108 node_ready.go:38] duration metric: took 19.503123053s for node "multinode-899276" to be "Ready" ...
	I1227 08:55:40.960216   24108 api_server.go:52] waiting for apiserver process to appear ...
	I1227 08:55:40.960272   24108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:55:40.980487   24108 api_server.go:72] duration metric: took 20.419735752s to wait for apiserver process to appear ...
	I1227 08:55:40.980522   24108 api_server.go:88] waiting for apiserver healthz status ...
	I1227 08:55:40.980545   24108 api_server.go:299] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1227 08:55:40.985397   24108 api_server.go:325] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1227 08:55:40.986902   24108 api_server.go:141] control plane version: v1.35.0
	I1227 08:55:40.986929   24108 api_server.go:131] duration metric: took 6.398762ms to wait for apiserver health ...
	I1227 08:55:40.986938   24108 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 08:55:40.990608   24108 system_pods.go:59] 8 kube-system pods found
	I1227 08:55:40.990654   24108 system_pods.go:61] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:40.990664   24108 system_pods.go:61] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:40.990674   24108 system_pods.go:61] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:40.990682   24108 system_pods.go:61] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:40.990688   24108 system_pods.go:61] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:40.990698   24108 system_pods.go:61] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:40.990703   24108 system_pods.go:61] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:40.990715   24108 system_pods.go:61] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:40.990723   24108 system_pods.go:74] duration metric: took 3.778634ms to wait for pod list to return data ...
	I1227 08:55:40.990733   24108 default_sa.go:34] waiting for default service account to be created ...
	I1227 08:55:40.993709   24108 default_sa.go:45] found service account: "default"
	I1227 08:55:40.993729   24108 default_sa.go:55] duration metric: took 2.988456ms for default service account to be created ...
	I1227 08:55:40.993736   24108 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 08:55:40.996625   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:40.996661   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:40.996672   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:40.996683   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:40.996690   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:40.996698   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:40.996709   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:40.996716   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:40.996727   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:40.996757   24108 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 08:55:41.222991   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.223041   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:41.223072   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.223082   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.223088   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.223095   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.223101   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.223107   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.223115   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:41.595420   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.595456   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:41.595463   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.595468   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.595472   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.595476   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.595479   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.595482   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.595487   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:41.921377   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.921417   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Running
	I1227 08:55:41.921426   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.921432   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.921437   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.921443   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.921448   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.921453   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.921458   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Running
	I1227 08:55:41.921468   24108 system_pods.go:126] duration metric: took 927.725772ms to wait for k8s-apps to be running ...
	I1227 08:55:41.921482   24108 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 08:55:41.921538   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:55:41.943521   24108 system_svc.go:56] duration metric: took 22.03282ms WaitForService to wait for kubelet
	I1227 08:55:41.943547   24108 kubeadm.go:587] duration metric: took 21.382801319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 08:55:41.943563   24108 node_conditions.go:102] verifying NodePressure condition ...
	I1227 08:55:41.946923   24108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1227 08:55:41.946949   24108 node_conditions.go:123] node cpu capacity is 2
	I1227 08:55:41.946964   24108 node_conditions.go:105] duration metric: took 3.396847ms to run NodePressure ...
	I1227 08:55:41.946975   24108 start.go:242] waiting for startup goroutines ...
	I1227 08:55:41.946982   24108 start.go:247] waiting for cluster config update ...
	I1227 08:55:41.946995   24108 start.go:256] writing updated cluster config ...
	I1227 08:55:41.949394   24108 out.go:203] 
	I1227 08:55:41.951062   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:41.951143   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:41.952889   24108 out.go:179] * Starting "multinode-899276-m02" worker node in "multinode-899276" cluster
	I1227 08:55:41.954248   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:55:41.954267   24108 cache.go:65] Caching tarball of preloaded images
	I1227 08:55:41.954391   24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 08:55:41.954406   24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 08:55:41.954483   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:41.954681   24108 start.go:360] acquireMachinesLock for multinode-899276-m02: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 08:55:41.954734   24108 start.go:364] duration metric: took 30.88µs to acquireMachinesLock for "multinode-899276-m02"
	I1227 08:55:41.954766   24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1227 08:55:41.954827   24108 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1227 08:55:41.956569   24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1227 08:55:41.956662   24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
	I1227 08:55:41.956692   24108 client.go:173] LocalClient.Create starting
	I1227 08:55:41.956761   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
	I1227 08:55:41.956803   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:55:41.956824   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:55:41.956873   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
	I1227 08:55:41.956892   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:55:41.956910   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:55:41.957088   24108 main.go:144] libmachine: creating domain...
	I1227 08:55:41.957098   24108 main.go:144] libmachine: creating network...
	I1227 08:55:41.958253   24108 main.go:144] libmachine: found existing default network
	I1227 08:55:41.958505   24108 main.go:144] libmachine: <network connections='1'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:55:41.958687   24108 main.go:144] libmachine: found existing mk-multinode-899276 private network, skipping creation
	I1227 08:55:41.958885   24108 main.go:144] libmachine: <network>
	  <name>mk-multinode-899276</name>
	  <uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:7e:96:0f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	      <host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>
	    </dhcp>
	  </ip>
	</network>
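	If the "waiting for IP" loop later in this log stalls, the same libvirt state can be inspected by hand with virsh. A minimal sketch, using only the network name shown in the XML above (generic virsh invocations, not part of the minikube run itself):
	    # show the current definition of the minikube private network
	    virsh net-dumpxml mk-multinode-899276
	    # list DHCP leases on that network; the m02 node should appear once it has an address
	    virsh net-dhcp-leases mk-multinode-899276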
	
	I1227 08:55:41.959076   24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
	I1227 08:55:41.959099   24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
	I1227 08:55:41.959107   24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:55:41.959186   24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
	I1227 08:55:42.180540   24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa...
	I1227 08:55:42.254861   24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk...
	I1227 08:55:42.254917   24108 main.go:144] libmachine: Writing magic tar header
	I1227 08:55:42.254943   24108 main.go:144] libmachine: Writing SSH key tar header
	I1227 08:55:42.255061   24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
	I1227 08:55:42.255137   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02
	I1227 08:55:42.255165   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 (perms=drwx------)
	I1227 08:55:42.255182   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
	I1227 08:55:42.255201   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
	I1227 08:55:42.255216   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:55:42.255227   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
	I1227 08:55:42.255238   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
	I1227 08:55:42.255257   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
	I1227 08:55:42.255282   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1227 08:55:42.255298   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1227 08:55:42.255318   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1227 08:55:42.255333   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1227 08:55:42.255348   24108 main.go:144] libmachine: checking permissions on dir: /home
	I1227 08:55:42.255359   24108 main.go:144] libmachine: skipping /home - not owner
	I1227 08:55:42.255363   24108 main.go:144] libmachine: defining domain...
	I1227 08:55:42.256580   24108 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>multinode-899276-m02</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
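	libmachine hands XML like the above straight to libvirt. The equivalent manual steps, sketched with plain virsh (the file name is illustrative; the domain name is the one defined above):
	    # define the guest from a saved copy of the XML, then boot it
	    virsh define multinode-899276-m02.xml
	    virsh start multinode-899276-m02
	    # confirm the domain is running
	    virsh domstate multinode-899276-m02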
	
	I1227 08:55:42.265000   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:b3:04:b6 in network default
	I1227 08:55:42.265650   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:42.265669   24108 main.go:144] libmachine: starting domain...
	I1227 08:55:42.265674   24108 main.go:144] libmachine: ensuring networks are active...
	I1227 08:55:42.266690   24108 main.go:144] libmachine: Ensuring network default is active
	I1227 08:55:42.267245   24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
	I1227 08:55:42.267833   24108 main.go:144] libmachine: getting domain XML...
	I1227 08:55:42.269145   24108 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-899276-m02</name>
	  <uuid>08f0927e-00b1-40b5-b768-ac07d0776e28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:9b:0b:64'/>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b3:04:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:55:43.575420   24108 main.go:144] libmachine: waiting for domain to start...
	I1227 08:55:43.576915   24108 main.go:144] libmachine: domain is now running
	I1227 08:55:43.576935   24108 main.go:144] libmachine: waiting for IP...
	I1227 08:55:43.577720   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:43.578257   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:43.578273   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:43.578564   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:43.833127   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:43.833729   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:43.833744   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:43.834083   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.161636   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.162394   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.162413   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.162749   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.477602   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.478263   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.478282   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.478685   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.857427   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.858004   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.858026   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.858397   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:45.619396   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:45.619938   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:45.619953   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:45.620268   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:46.214206   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:46.214738   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:46.214760   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:46.215107   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:47.368589   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:47.369148   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:47.369169   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:47.369473   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:48.790105   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:48.790775   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:48.790792   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:48.791137   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:50.057612   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:50.058205   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:50.058230   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:50.058563   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:51.571769   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:51.572501   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:51.572522   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:51.572969   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:54.369906   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:54.370596   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:54.370610   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:54.370961   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:57.241023   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.241672   24108 main.go:144] libmachine: domain multinode-899276-m02 has current primary IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.241689   24108 main.go:144] libmachine: found domain IP: 192.168.39.160
	I1227 08:55:57.241696   24108 main.go:144] libmachine: reserving static IP address...
	I1227 08:55:57.242083   24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276-m02", mac: "52:54:00:9b:0b:64", ip: "192.168.39.160"} in network mk-multinode-899276
	I1227 08:55:57.450637   24108 main.go:144] libmachine: reserved static IP address 192.168.39.160 for domain multinode-899276-m02
	I1227 08:55:57.450661   24108 main.go:144] libmachine: waiting for SSH...
	I1227 08:55:57.450668   24108 main.go:144] libmachine: Getting to WaitForSSH function...
	I1227 08:55:57.453744   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.454265   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.454291   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.454489   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.454732   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.454744   24108 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1227 08:55:57.569604   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
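	The "exit 0" probe above is only a reachability check. An equivalent manual probe, assuming the SSH key path, user, and IP that appear elsewhere in this log (ConnectTimeout is just a convenience flag here, not something minikube sets):
	    ssh -i /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa \
	        -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
	        docker@192.168.39.160 'exit 0' && echo "ssh is reachable"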
	I1227 08:55:57.570099   24108 main.go:144] libmachine: domain creation complete
	I1227 08:55:57.571770   24108 machine.go:94] provisionDockerMachine start ...
	I1227 08:55:57.574152   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.574608   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.574633   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.574862   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.575132   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.575147   24108 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 08:55:57.686687   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1227 08:55:57.686742   24108 buildroot.go:166] provisioning hostname "multinode-899276-m02"
	I1227 08:55:57.689982   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.690439   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.690482   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.690712   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.690987   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.691006   24108 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-899276-m02 && echo "multinode-899276-m02" | sudo tee /etc/hostname
	I1227 08:55:57.825642   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276-m02
	
	I1227 08:55:57.828982   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.829434   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.829471   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.829664   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.829868   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.829883   24108 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899276-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899276-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 08:55:57.955353   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:55:57.955387   24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
	I1227 08:55:57.955404   24108 buildroot.go:174] setting up certificates
	I1227 08:55:57.955412   24108 provision.go:84] configureAuth start
	I1227 08:55:57.958329   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.958721   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.958743   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961212   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961604   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.961634   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961769   24108 provision.go:143] copyHostCerts
	I1227 08:55:57.961801   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:55:57.961840   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
	I1227 08:55:57.961853   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:55:57.961943   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
	I1227 08:55:57.962064   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:55:57.962093   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
	I1227 08:55:57.962101   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:55:57.962149   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
	I1227 08:55:57.962220   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:55:57.962245   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
	I1227 08:55:57.962253   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:55:57.962290   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
	I1227 08:55:57.962357   24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276-m02 san=[127.0.0.1 192.168.39.160 localhost minikube multinode-899276-m02]
	I1227 08:55:58.062355   24108 provision.go:177] copyRemoteCerts
	I1227 08:55:58.062418   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 08:55:58.065702   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.066127   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.066154   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.066319   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:58.156852   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 08:55:58.156925   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 08:55:58.186973   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 08:55:58.187035   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1227 08:55:58.216314   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 08:55:58.216378   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 08:55:58.250146   24108 provision.go:87] duration metric: took 294.721391ms to configureAuth
	I1227 08:55:58.250177   24108 buildroot.go:189] setting minikube options for container-runtime
	I1227 08:55:58.250357   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:58.252989   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.253461   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.253487   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.253690   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.253921   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.253934   24108 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 08:55:58.373697   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 08:55:58.373723   24108 buildroot.go:70] root file system type: tmpfs
	I1227 08:55:58.373873   24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 08:55:58.376713   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.377114   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.377139   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.377329   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.377512   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.377555   24108 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.39.24"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 08:55:58.508330   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.39.24
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 08:55:58.511413   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.511851   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.511879   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.512069   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.512332   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.512351   24108 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 08:55:59.431853   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1227 08:55:59.431877   24108 machine.go:97] duration metric: took 1.86008098s to provisionDockerMachine
	I1227 08:55:59.431888   24108 client.go:176] duration metric: took 17.475186189s to LocalClient.Create
	I1227 08:55:59.431902   24108 start.go:167] duration metric: took 17.47524121s to libmachine.API.Create "multinode-899276"
	I1227 08:55:59.431909   24108 start.go:293] postStartSetup for "multinode-899276-m02" (driver="kvm2")
	I1227 08:55:59.431918   24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 08:55:59.431968   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 08:55:59.434620   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.435132   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.435167   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.435355   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:59.525674   24108 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 08:55:59.530511   24108 info.go:137] Remote host: Buildroot 2025.02
	I1227 08:55:59.530547   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
	I1227 08:55:59.530632   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
	I1227 08:55:59.530706   24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
	I1227 08:55:59.530716   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
	I1227 08:55:59.530821   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 08:55:59.542821   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:55:59.573575   24108 start.go:296] duration metric: took 141.651568ms for postStartSetup
	I1227 08:55:59.576745   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.577190   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.577225   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.577486   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:59.577738   24108 start.go:128] duration metric: took 17.622900484s to createHost
	I1227 08:55:59.579881   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.580246   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.580267   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.580524   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:59.580736   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:59.580748   24108 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1227 08:55:59.695810   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825759.656998713
	
	I1227 08:55:59.695838   24108 fix.go:216] guest clock: 1766825759.656998713
	I1227 08:55:59.695847   24108 fix.go:229] Guest: 2025-12-27 08:55:59.656998713 +0000 UTC Remote: 2025-12-27 08:55:59.577753428 +0000 UTC m=+82.275426938 (delta=79.245285ms)
	I1227 08:55:59.695869   24108 fix.go:200] guest clock delta is within tolerance: 79.245285ms
	I1227 08:55:59.695877   24108 start.go:83] releasing machines lock for "multinode-899276-m02", held for 17.741133225s
	I1227 08:55:59.698823   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.699365   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.699403   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.701968   24108 out.go:179] * Found network options:
	I1227 08:55:59.703396   24108 out.go:179]   - NO_PROXY=192.168.39.24
	W1227 08:55:59.704647   24108 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 08:55:59.705042   24108 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 08:55:59.705131   24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1227 08:55:59.705131   24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 08:55:59.708339   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708387   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708760   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.708817   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.708844   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708889   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.709024   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:59.709228   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	W1227 08:55:59.793520   24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 08:55:59.793609   24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 08:55:59.816238   24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 08:55:59.816269   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:55:59.816301   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:55:59.816397   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:55:59.839936   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 08:55:59.852570   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 08:55:59.865005   24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 08:55:59.865103   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 08:55:59.877853   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:55:59.890799   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 08:55:59.903794   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:55:59.916281   24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 08:55:59.929816   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 08:55:59.942187   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 08:55:59.955245   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 08:55:59.968552   24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 08:55:59.979484   24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1227 08:55:59.979563   24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1227 08:55:59.993561   24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 08:56:00.006240   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:00.152118   24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 08:56:00.190124   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:56:00.190172   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:56:00.190230   24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 08:56:00.211952   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:56:00.237208   24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 08:56:00.259010   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:56:00.275879   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:56:00.293605   24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 08:56:00.326414   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:56:00.342364   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:56:00.365931   24108 ssh_runner.go:195] Run: which cri-dockerd
	I1227 08:56:00.370257   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 08:56:00.382716   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 08:56:00.404739   24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 08:56:00.548335   24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 08:56:00.689510   24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 08:56:00.689570   24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 08:56:00.729510   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:56:00.746884   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:00.890844   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:56:01.355108   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 08:56:01.370599   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 08:56:01.386540   24108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1227 08:56:01.404096   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:56:01.419794   24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 08:56:01.561520   24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 08:56:01.708164   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:01.863090   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 08:56:01.899043   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 08:56:01.915288   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:02.062800   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 08:56:02.174498   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:56:02.198066   24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 08:56:02.198172   24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 08:56:02.204239   24108 start.go:574] Will wait 60s for crictl version
	I1227 08:56:02.204318   24108 ssh_runner.go:195] Run: which crictl
	I1227 08:56:02.208415   24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 08:56:02.242462   24108 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1227 08:56:02.242547   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:56:02.272210   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:56:02.305864   24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1227 08:56:02.307155   24108 out.go:179]   - env NO_PROXY=192.168.39.24
	I1227 08:56:02.310958   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:56:02.311334   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:56:02.311356   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:56:02.311519   24108 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1227 08:56:02.316034   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:56:02.330706   24108 mustload.go:66] Loading cluster: multinode-899276
	I1227 08:56:02.330927   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:56:02.332363   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:56:02.332574   24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.160
	I1227 08:56:02.332593   24108 certs.go:195] generating shared ca certs ...
	I1227 08:56:02.332615   24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:56:02.332749   24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
	I1227 08:56:02.332808   24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
	I1227 08:56:02.332826   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 08:56:02.332851   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 08:56:02.332871   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 08:56:02.332887   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 08:56:02.332965   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
	W1227 08:56:02.333010   24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
	I1227 08:56:02.333027   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 08:56:02.333079   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
	I1227 08:56:02.333119   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
	I1227 08:56:02.333153   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
	I1227 08:56:02.333216   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:56:02.333264   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.333285   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.333302   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.333328   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 08:56:02.365645   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 08:56:02.395629   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 08:56:02.425519   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 08:56:02.455554   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 08:56:02.486238   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
	I1227 08:56:02.515842   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
	I1227 08:56:02.545758   24108 ssh_runner.go:195] Run: openssl version
	I1227 08:56:02.552395   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.564618   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
	I1227 08:56:02.577235   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.582685   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.582759   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.590482   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 08:56:02.601896   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
	I1227 08:56:02.613606   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.625518   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 08:56:02.637508   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.642823   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.642901   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.650764   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 08:56:02.663547   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 08:56:02.675853   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.688458   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
	I1227 08:56:02.701658   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.706958   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.707033   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.714242   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 08:56:02.726789   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
	I1227 08:56:02.740816   24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 08:56:02.745870   24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 08:56:02.745924   24108 kubeadm.go:935] updating node {m02 192.168.39.160 8443 v1.35.0 docker false true} ...
	I1227 08:56:02.746010   24108 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 08:56:02.746115   24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 08:56:02.758129   24108 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1227 08:56:02.758244   24108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1227 08:56:02.770426   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I1227 08:56:02.770451   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I1227 08:56:02.770474   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:56:02.770479   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm -> /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 08:56:02.770428   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1227 08:56:02.770532   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 08:56:02.770547   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl -> /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 08:56:02.770638   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 08:56:02.775599   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1227 08:56:02.775636   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1227 08:56:02.800423   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet -> /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 08:56:02.800448   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1227 08:56:02.800474   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1227 08:56:02.800530   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 08:56:02.847555   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1227 08:56:02.847596   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
	I1227 08:56:03.589571   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1227 08:56:03.603768   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1227 08:56:03.631212   24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 08:56:03.655890   24108 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1227 08:56:03.660915   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:56:03.680065   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:03.823402   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:56:03.862307   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:56:03.862561   24108 start.go:318] joinCluster: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0
ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExp
iration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:56:03.862676   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1227 08:56:03.865388   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:56:03.865858   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:56:03.865900   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:56:03.866073   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:56:04.026904   24108 start.go:344] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1227 08:56:04.027011   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9k0kod.6geqtmlyqvlg3686 --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-899276-m02"
	I1227 08:56:04.959833   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1227 08:56:05.276831   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false
	I1227 08:56:05.365119   24108 start.go:320] duration metric: took 1.502556165s to joinCluster
	I1227 08:56:05.367341   24108 out.go:203] 
	W1227 08:56:05.368707   24108 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-899276-m02" not found
	
	X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-899276-m02" not found
	
	W1227 08:56:05.368724   24108 out.go:285] * 
	* 
	W1227 08:56:05.369029   24108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 08:56:05.370349   24108 out.go:203] 

                                                
                                                
** /stderr **
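Reading the stderr above: the worker join command returns at 08:56:04 and the kubectl label call runs at 08:56:05, and the label fails with `Error from server (NotFound): nodes "multinode-899276-m02" not found`. This pattern suggests the m02 Node object had not yet been registered with the API server when the label was applied, though the log alone does not prove it. A quick follow-up check (hedged sketch, not part of the test run; profile and node names copied from the log, commands are standard minikube/kubectl usage) would be:

	# Did the worker node ever register with the control plane?
	out/minikube-linux-amd64 -p multinode-899276 kubectl -- get nodes -o wide

	# Inspect the kubelet on the worker guest for registration errors:
	out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m02 -- sudo journalctl -u kubelet --no-pager | tail -n 50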
multinode_test.go:98: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 " : exit status 80
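For reference, a labeling step that tolerates the registration delay would wait for the Node object before applying labels. The loop below is only an illustrative sketch of that idea (the retry count and sleep are assumptions, not minikube's actual logic); the binary path, kubeconfig path, node name, and label keys are copied from the failing command above:

	# Hypothetical wait-then-label loop, run on the control-plane guest.
	KUBECTL=/var/lib/minikube/binaries/v1.35.0/kubectl
	KCFG=/var/lib/minikube/kubeconfig
	NODE=multinode-899276-m02

	for i in $(seq 1 30); do
	  # Stop waiting as soon as the Node object is visible to the API server.
	  if sudo "$KUBECTL" --kubeconfig="$KCFG" get node "$NODE" >/dev/null 2>&1; then
	    break
	  fi
	  sleep 2
	done
	sudo "$KUBECTL" --kubeconfig="$KCFG" label --overwrite nodes "$NODE" \
	  minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false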
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-899276 -n multinode-899276
helpers_test.go:253: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p multinode-899276 logs -n 25: (1.029886594s)
helpers_test.go:261: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                    ARGS                                                                                                     │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p json-output-error-635110 --memory=3072 --output=json --wait=true --driver=fail                                                                                                                           │ json-output-error-635110 │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │                     │
	│ delete  │ -p json-output-error-635110                                                                                                                                                                                 │ json-output-error-635110 │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │ 27 Dec 25 08:52 UTC │
	│ start   │ -p first-739389 --driver=kvm2                                                                                                                                                                               │ first-739389             │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │ 27 Dec 25 08:52 UTC │
	│ start   │ -p second-741777 --driver=kvm2                                                                                                                                                                              │ second-741777            │ jenkins │ v1.37.0 │ 27 Dec 25 08:52 UTC │ 27 Dec 25 08:53 UTC │
	│ delete  │ -p second-741777                                                                                                                                                                                            │ second-741777            │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
	│ delete  │ -p first-739389                                                                                                                                                                                             │ first-739389             │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
	│ start   │ -p mount-start-1-817954 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 │ mount-start-1-817954     │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
	│ mount   │ /tmp/TestMountStartserial2539336940/001:/minikube-host --profile mount-start-1-817954 --v 0 --9p-version 9p2000.L --gid 0 --ip  --msize 6543 --port 46464 --type 9p --uid 0                                 │ mount-start-1-817954     │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │                     │
	│ ssh     │ mount-start-1-817954 ssh -- ls /minikube-host                                                                                                                                                               │ mount-start-1-817954     │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
	│ ssh     │ mount-start-1-817954 ssh -- findmnt --json /minikube-host                                                                                                                                                   │ mount-start-1-817954     │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:53 UTC │
	│ start   │ -p mount-start-2-834751 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:53 UTC │ 27 Dec 25 08:54 UTC │
	│ mount   │ /tmp/TestMountStartserial2539336940/001:/minikube-host --profile mount-start-2-834751 --v 0 --9p-version 9p2000.L --gid 0 --ip  --msize 6543 --port 46465 --type 9p --uid 0                                 │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │                     │
	│ ssh     │ mount-start-2-834751 ssh -- ls /minikube-host                                                                                                                                                               │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ ssh     │ mount-start-2-834751 ssh -- findmnt --json /minikube-host                                                                                                                                                   │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ delete  │ -p mount-start-1-817954 --alsologtostderr -v=5                                                                                                                                                              │ mount-start-1-817954     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ ssh     │ mount-start-2-834751 ssh -- ls /minikube-host                                                                                                                                                               │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ ssh     │ mount-start-2-834751 ssh -- findmnt --json /minikube-host                                                                                                                                                   │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ stop    │ -p mount-start-2-834751                                                                                                                                                                                     │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ start   │ -p mount-start-2-834751                                                                                                                                                                                     │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ mount   │ /tmp/TestMountStartserial2539336940/001:/minikube-host --profile mount-start-2-834751 --v 0 --9p-version 9p2000.L --gid 0 --ip  --msize 6543 --port 46465 --type 9p --uid 0                                 │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │                     │
	│ ssh     │ mount-start-2-834751 ssh -- ls /minikube-host                                                                                                                                                               │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ ssh     │ mount-start-2-834751 ssh -- findmnt --json /minikube-host                                                                                                                                                   │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ delete  │ -p mount-start-2-834751                                                                                                                                                                                     │ mount-start-2-834751     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ delete  │ -p mount-start-1-817954                                                                                                                                                                                     │ mount-start-1-817954     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ start   │ -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2                                                                                                                │ multinode-899276         │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 08:54:37
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 08:54:37.348894   24108 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:54:37.349196   24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:54:37.349207   24108 out.go:374] Setting ErrFile to fd 2...
	I1227 08:54:37.349214   24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:54:37.349401   24108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:54:37.349901   24108 out.go:368] Setting JSON to false
	I1227 08:54:37.350702   24108 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2227,"bootTime":1766823450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 08:54:37.350761   24108 start.go:143] virtualization: kvm guest
	I1227 08:54:37.352914   24108 out.go:179] * [multinode-899276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 08:54:37.354122   24108 notify.go:221] Checking for updates...
	I1227 08:54:37.354140   24108 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 08:54:37.355599   24108 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:54:37.356985   24108 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:54:37.358228   24108 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.359373   24108 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 08:54:37.360648   24108 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 08:54:37.362069   24108 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:54:37.398292   24108 out.go:179] * Using the kvm2 driver based on user configuration
	I1227 08:54:37.399595   24108 start.go:309] selected driver: kvm2
	I1227 08:54:37.399614   24108 start.go:928] validating driver "kvm2" against <nil>
	I1227 08:54:37.399634   24108 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 08:54:37.400332   24108 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 08:54:37.400590   24108 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 08:54:37.400626   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:54:37.400682   24108 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1227 08:54:37.400692   24108 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 08:54:37.400744   24108 start.go:353] cluster config:
	{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:54:37.400897   24108 iso.go:125] acquiring lock: {Name:mkf3af0a60e6ccee2eeb813de50903ed5d7e8922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 08:54:37.402631   24108 out.go:179] * Starting "multinode-899276" primary control-plane node in "multinode-899276" cluster
	I1227 08:54:37.403816   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:54:37.403844   24108 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 08:54:37.403854   24108 cache.go:65] Caching tarball of preloaded images
	I1227 08:54:37.403951   24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 08:54:37.403967   24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 08:54:37.404346   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:54:37.404374   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json: {Name:mk5e07ed738ae868a23976588c175a8cb2b30a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:54:37.404563   24108 start.go:360] acquireMachinesLock for multinode-899276: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 08:54:37.404598   24108 start.go:364] duration metric: took 20.431µs to acquireMachinesLock for "multinode-899276"
	I1227 08:54:37.404622   24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 08:54:37.404675   24108 start.go:125] createHost starting for "" (driver="kvm2")
	I1227 08:54:37.407102   24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1227 08:54:37.407274   24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
	I1227 08:54:37.407306   24108 client.go:173] LocalClient.Create starting
	I1227 08:54:37.407365   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
	I1227 08:54:37.407409   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:54:37.407425   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:54:37.407478   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
	I1227 08:54:37.407496   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:54:37.407507   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:54:37.407806   24108 main.go:144] libmachine: creating domain...
	I1227 08:54:37.407817   24108 main.go:144] libmachine: creating network...
	I1227 08:54:37.409512   24108 main.go:144] libmachine: found existing default network
	I1227 08:54:37.409702   24108 main.go:144] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.410292   24108 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001caea70}
	I1227 08:54:37.410380   24108 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-multinode-899276</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.416200   24108 main.go:144] libmachine: creating private network mk-multinode-899276 192.168.39.0/24...
	I1227 08:54:37.484690   24108 main.go:144] libmachine: private network mk-multinode-899276 192.168.39.0/24 created
	I1227 08:54:37.484994   24108 main.go:144] libmachine: <network>
	  <name>mk-multinode-899276</name>
	  <uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:7e:96:0f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.485088   24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
	I1227 08:54:37.485112   24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
	I1227 08:54:37.485123   24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.485174   24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
	I1227 08:54:37.708878   24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa...
	I1227 08:54:37.789981   24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk...
	I1227 08:54:37.790024   24108 main.go:144] libmachine: Writing magic tar header
	I1227 08:54:37.790040   24108 main.go:144] libmachine: Writing SSH key tar header
	I1227 08:54:37.790127   24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
	I1227 08:54:37.790183   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276
	I1227 08:54:37.790204   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 (perms=drwx------)
	I1227 08:54:37.790215   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
	I1227 08:54:37.790225   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
	I1227 08:54:37.790238   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.790249   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
	I1227 08:54:37.790257   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
	I1227 08:54:37.790265   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
	I1227 08:54:37.790275   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1227 08:54:37.790287   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1227 08:54:37.790303   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1227 08:54:37.790313   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1227 08:54:37.790321   24108 main.go:144] libmachine: checking permissions on dir: /home
	I1227 08:54:37.790330   24108 main.go:144] libmachine: skipping /home - not owner
	I1227 08:54:37.790334   24108 main.go:144] libmachine: defining domain...
	I1227 08:54:37.792061   24108 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>multinode-899276</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:54:37.797217   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:e2:49:84 in network default
	I1227 08:54:37.797913   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:37.797931   24108 main.go:144] libmachine: starting domain...
	I1227 08:54:37.797936   24108 main.go:144] libmachine: ensuring networks are active...
	I1227 08:54:37.798746   24108 main.go:144] libmachine: Ensuring network default is active
	I1227 08:54:37.799132   24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
	I1227 08:54:37.799776   24108 main.go:144] libmachine: getting domain XML...
	I1227 08:54:37.800794   24108 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-899276</name>
	  <uuid>6d370929-9382-4953-8ba6-4fb6eca3e648</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4c:5c:b4'/>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e2:49:84'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:54:39.079279   24108 main.go:144] libmachine: waiting for domain to start...
	I1227 08:54:39.080610   24108 main.go:144] libmachine: domain is now running
	I1227 08:54:39.080624   24108 main.go:144] libmachine: waiting for IP...
	I1227 08:54:39.081451   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.082023   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.082037   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.082336   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.082377   24108 retry.go:84] will retry after 200ms: waiting for domain to come up
	I1227 08:54:39.326020   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.326723   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.326741   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.327098   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.575768   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.576511   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.576534   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.576883   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.876331   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.877091   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.877107   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.877413   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:40.370368   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:40.371069   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:40.371086   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:40.371431   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:40.865483   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:40.866211   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:40.866236   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:40.866603   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:41.484623   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:41.485260   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:41.485279   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:41.485638   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:42.393849   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:42.394445   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:42.394463   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:42.394914   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:43.319225   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:43.320003   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:43.320020   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:43.320334   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:44.724122   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:44.724874   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:44.724891   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:44.725237   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:46.322345   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:46.323107   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:46.323130   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:46.323457   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:48.157422   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:48.158091   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:48.158110   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:48.158455   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:51.501875   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:51.502515   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:51.502530   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:51.502791   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:51.502830   24108 retry.go:84] will retry after 4.3s: waiting for domain to come up
	I1227 08:54:55.837835   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:55.838577   24108 main.go:144] libmachine: domain multinode-899276 has current primary IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:55.838596   24108 main.go:144] libmachine: found domain IP: 192.168.39.24
	I1227 08:54:55.838605   24108 main.go:144] libmachine: reserving static IP address...
	I1227 08:54:55.839242   24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276", mac: "52:54:00:4c:5c:b4", ip: "192.168.39.24"} in network mk-multinode-899276
	I1227 08:54:56.025597   24108 main.go:144] libmachine: reserved static IP address 192.168.39.24 for domain multinode-899276
	I1227 08:54:56.025623   24108 main.go:144] libmachine: waiting for SSH...
	I1227 08:54:56.025631   24108 main.go:144] libmachine: Getting to WaitForSSH function...
	I1227 08:54:56.028518   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.029028   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.029077   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.029273   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.029482   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.029494   24108 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1227 08:54:56.143804   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:54:56.144248   24108 main.go:144] libmachine: domain creation complete
	I1227 08:54:56.146013   24108 machine.go:94] provisionDockerMachine start ...
	I1227 08:54:56.148712   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.149157   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.149206   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.149383   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.149565   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.149574   24108 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 08:54:56.263810   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1227 08:54:56.263841   24108 buildroot.go:166] provisioning hostname "multinode-899276"
	I1227 08:54:56.266910   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.267410   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.267435   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.267640   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.267847   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.267858   24108 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-899276 && echo "multinode-899276" | sudo tee /etc/hostname
	I1227 08:54:56.401325   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276
	
	I1227 08:54:56.404664   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.405235   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.405263   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.405433   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.405644   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.405659   24108 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 08:54:56.543193   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:54:56.543230   24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
	I1227 08:54:56.543264   24108 buildroot.go:174] setting up certificates
	I1227 08:54:56.543282   24108 provision.go:84] configureAuth start
	I1227 08:54:56.546171   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.546588   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.546612   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.548760   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.549114   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.549136   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.549243   24108 provision.go:143] copyHostCerts
	I1227 08:54:56.549266   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:54:56.549290   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
	I1227 08:54:56.549298   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:54:56.549370   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
	I1227 08:54:56.549490   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:54:56.549516   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
	I1227 08:54:56.549522   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:54:56.549548   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
	I1227 08:54:56.549593   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:54:56.549609   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
	I1227 08:54:56.549615   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:54:56.549634   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
	I1227 08:54:56.549680   24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276 san=[127.0.0.1 192.168.39.24 localhost minikube multinode-899276]
	I1227 08:54:56.564952   24108 provision.go:177] copyRemoteCerts
	I1227 08:54:56.565003   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 08:54:56.567240   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.567643   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.567677   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.567850   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:56.656198   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 08:54:56.656292   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 08:54:56.685216   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 08:54:56.685304   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1227 08:54:56.714733   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 08:54:56.714819   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 08:54:56.743305   24108 provision.go:87] duration metric: took 199.989326ms to configureAuth
	I1227 08:54:56.743338   24108 buildroot.go:189] setting minikube options for container-runtime
	I1227 08:54:56.743528   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:54:56.746235   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.746587   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.746606   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.746782   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.747027   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.747039   24108 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 08:54:56.861225   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 08:54:56.861255   24108 buildroot.go:70] root file system type: tmpfs
	I1227 08:54:56.861417   24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 08:54:56.864305   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.864731   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.864767   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.864925   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.865130   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.865170   24108 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 08:54:56.996399   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 08:54:56.999444   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.999882   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.999912   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.000156   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:57.000379   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:57.000396   24108 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 08:54:57.924795   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1227 08:54:57.924823   24108 machine.go:97] duration metric: took 1.778786884s to provisionDockerMachine
	I1227 08:54:57.924839   24108 client.go:176] duration metric: took 20.517522558s to LocalClient.Create
	I1227 08:54:57.924853   24108 start.go:167] duration metric: took 20.517578026s to libmachine.API.Create "multinode-899276"
	I1227 08:54:57.924862   24108 start.go:293] postStartSetup for "multinode-899276" (driver="kvm2")
	I1227 08:54:57.924874   24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 08:54:57.924962   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 08:54:57.927733   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.928188   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:57.928219   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.928364   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.017094   24108 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 08:54:58.021892   24108 info.go:137] Remote host: Buildroot 2025.02
	I1227 08:54:58.021927   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
	I1227 08:54:58.022001   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
	I1227 08:54:58.022108   24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
	I1227 08:54:58.022115   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
	I1227 08:54:58.022194   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 08:54:58.035018   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:54:58.064746   24108 start.go:296] duration metric: took 139.872084ms for postStartSetup
	I1227 08:54:58.067860   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.068279   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.068306   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.068579   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:54:58.068756   24108 start.go:128] duration metric: took 20.664071028s to createHost
	I1227 08:54:58.071566   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.072015   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.072040   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.072244   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:58.072473   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:58.072488   24108 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1227 08:54:58.187322   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825698.156416973
	
	I1227 08:54:58.187344   24108 fix.go:216] guest clock: 1766825698.156416973
	I1227 08:54:58.187351   24108 fix.go:229] Guest: 2025-12-27 08:54:58.156416973 +0000 UTC Remote: 2025-12-27 08:54:58.068766977 +0000 UTC m=+20.766440443 (delta=87.649996ms)
	I1227 08:54:58.187367   24108 fix.go:200] guest clock delta is within tolerance: 87.649996ms
	I1227 08:54:58.187371   24108 start.go:83] releasing machines lock for "multinode-899276", held for 20.782762567s
	I1227 08:54:58.189878   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.190311   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.190336   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.190848   24108 ssh_runner.go:195] Run: cat /version.json
	I1227 08:54:58.190934   24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 08:54:58.193909   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.193920   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194367   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.194393   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194412   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.194445   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194571   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.194749   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.303202   24108 ssh_runner.go:195] Run: systemctl --version
	I1227 08:54:58.309380   24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 08:54:58.315530   24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 08:54:58.315591   24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 08:54:58.335551   24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 08:54:58.335587   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:54:58.335615   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:54:58.335736   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:54:58.357443   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 08:54:58.369407   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 08:54:58.384702   24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 08:54:58.384807   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 08:54:58.399640   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:54:58.412464   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 08:54:58.424691   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:54:58.437707   24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 08:54:58.450402   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 08:54:58.462916   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 08:54:58.475650   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 08:54:58.493530   24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 08:54:58.504139   24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1227 08:54:58.504192   24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1227 08:54:58.516423   24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 08:54:58.528272   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:54:58.673716   24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
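The run above switches containerd to the systemd cgroup driver by editing /etc/containerd/config.toml in place and then restarting the service. A minimal standalone sketch of the same change, assuming a config.toml that already carries a SystemdCgroup key for the runc runtime, is:

	# Sketch: force containerd's runc runtime onto the systemd cgroup driver,
	# mirroring the sed edit minikube runs above.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	
	# Reload unit definitions and restart containerd so the change takes effect,
	# then confirm the service came back.
	sudo systemctl daemon-reload
	sudo systemctl restart containerd
	sudo systemctl is-active containerd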
	I1227 08:54:58.720867   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:54:58.720909   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:54:58.720972   24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 08:54:58.744526   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:54:58.764985   24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 08:54:58.785879   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:54:58.803205   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:54:58.821885   24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 08:54:58.856773   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:54:58.873676   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:54:58.896773   24108 ssh_runner.go:195] Run: which cri-dockerd
	I1227 08:54:58.901095   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 08:54:58.912977   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 08:54:58.935679   24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 08:54:59.087073   24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 08:54:59.235233   24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 08:54:59.235368   24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 08:54:59.257291   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:54:59.273342   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:54:59.413736   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:54:59.868087   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 08:54:59.883321   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 08:54:59.898581   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:54:59.913286   24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 08:55:00.062974   24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 08:55:00.214186   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:00.363957   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 08:55:00.400471   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 08:55:00.416741   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:00.560590   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 08:55:00.668182   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:55:00.687244   24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 08:55:00.687326   24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 08:55:00.693883   24108 start.go:574] Will wait 60s for crictl version
	I1227 08:55:00.693968   24108 ssh_runner.go:195] Run: which crictl
	I1227 08:55:00.698083   24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 08:55:00.732884   24108 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
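With cri-dockerd unmasked, enabled and restarted, minikube waits for /var/run/cri-dockerd.sock and then queries the runtime through crictl, which is what produces the version block above. A quick manual check against the same socket, assuming crictl is available on the node, would be:

	# Sketch: confirm the CRI endpoint written to /etc/crictl.yaml is answering.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	# Broader runtime status (same endpoint):
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info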
	I1227 08:55:00.732961   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:55:00.764467   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:55:00.793639   24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1227 08:55:00.796490   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:00.796890   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:00.796916   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:00.797129   24108 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1227 08:55:00.801979   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:55:00.819694   24108 kubeadm.go:884] updating cluster {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 08:55:00.819800   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:55:00.819853   24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 08:55:00.841928   24108 docker.go:694] Got preloaded images: 
	I1227 08:55:00.841951   24108 docker.go:700] registry.k8s.io/kube-apiserver:v1.35.0 wasn't preloaded
	I1227 08:55:00.841997   24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1227 08:55:00.855548   24108 ssh_runner.go:195] Run: which lz4
	I1227 08:55:00.860486   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1227 08:55:00.860594   24108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1227 08:55:00.865387   24108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1227 08:55:00.865417   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284632523 bytes)
	I1227 08:55:01.961740   24108 docker.go:658] duration metric: took 1.101175277s to copy over tarball
	I1227 08:55:01.961831   24108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1227 08:55:03.184079   24108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.222186343s)
	I1227 08:55:03.184117   24108 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1227 08:55:03.216811   24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1227 08:55:03.229331   24108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1227 08:55:03.250420   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:55:03.266159   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:03.414345   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:55:05.441484   24108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.027089175s)
	I1227 08:55:05.441602   24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 08:55:05.460483   24108 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 08:55:05.460508   24108 cache_images.go:86] Images are preloaded, skipping loading
	I1227 08:55:05.460517   24108 kubeadm.go:935] updating node { 192.168.39.24 8443 v1.35.0 docker true true} ...
	I1227 08:55:05.460610   24108 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 08:55:05.460667   24108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 08:55:05.512991   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:55:05.513022   24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1227 08:55:05.513043   24108 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 08:55:05.513080   24108 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899276 NodeName:multinode-899276 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 08:55:05.513228   24108 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 08:55:05.513292   24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 08:55:05.525546   24108 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 08:55:05.525616   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 08:55:05.537237   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1227 08:55:05.557993   24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 08:55:05.579343   24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
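The kubeadm.yaml.new just copied to /var/tmp/minikube combines the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration shown above in a single file. As an illustrative aside (not a step this test performs), such a file can be exercised without creating any cluster state by a dry run against the bundled binaries:

	# Sketch: dry-run kubeadm against the generated config to surface validation
	# errors early; --dry-run leaves the node untouched.
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run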
	I1227 08:55:05.600550   24108 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1227 08:55:05.605151   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:55:05.620984   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:05.769960   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:55:05.800659   24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.24
	I1227 08:55:05.800681   24108 certs.go:195] generating shared ca certs ...
	I1227 08:55:05.800706   24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.800879   24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
	I1227 08:55:05.800934   24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
	I1227 08:55:05.800949   24108 certs.go:257] generating profile certs ...
	I1227 08:55:05.801012   24108 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key
	I1227 08:55:05.801071   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt with IP's: []
	I1227 08:55:05.940834   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt ...
	I1227 08:55:05.940874   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt: {Name:mk02178aca7f56d432d5f5e37ab494f5434cad17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.941124   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key ...
	I1227 08:55:05.941147   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key: {Name:mk6471e99270ac274eb8d161834a8e74a99ce837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.941271   24108 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d
	I1227 08:55:05.941294   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
	I1227 08:55:05.986153   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d ...
	I1227 08:55:05.986188   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d: {Name:mk802401bb34f0577b94f18188268edd10cab228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.986405   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d ...
	I1227 08:55:05.986426   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d: {Name:mk499be31979f3e860f435493b7a49f6c8a77f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.986541   24108 certs.go:382] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt
	I1227 08:55:05.986669   24108 certs.go:386] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key
	I1227 08:55:05.986770   24108 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key
	I1227 08:55:05.986801   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt with IP's: []
	I1227 08:55:06.117402   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt ...
	I1227 08:55:06.117436   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt: {Name:mkff498d36179d0686c029b1a0d2c2aa28970730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:06.117638   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key ...
	I1227 08:55:06.117659   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key: {Name:mkae01040e0a5553a361620eb1dc3658cbd20bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:06.117774   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 08:55:06.117805   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 08:55:06.117825   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 08:55:06.117845   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 08:55:06.117861   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 08:55:06.117875   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 08:55:06.117888   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 08:55:06.117906   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 08:55:06.117969   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
	W1227 08:55:06.118021   24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
	I1227 08:55:06.118034   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 08:55:06.118087   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
	I1227 08:55:06.118141   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
	I1227 08:55:06.118179   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
	I1227 08:55:06.118236   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:55:06.118294   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.118318   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.118337   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.118857   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 08:55:06.150178   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 08:55:06.179223   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 08:55:06.208476   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 08:55:06.239094   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 08:55:06.268368   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 08:55:06.297730   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 08:55:06.326802   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 08:55:06.357205   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 08:55:06.387582   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
	I1227 08:55:06.417521   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
	I1227 08:55:06.449486   24108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 08:55:06.473842   24108 ssh_runner.go:195] Run: openssl version
	I1227 08:55:06.481673   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.494727   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
	I1227 08:55:06.506605   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.511904   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.511979   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.522748   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 08:55:06.535114   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
	I1227 08:55:06.546799   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.558007   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 08:55:06.569782   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.575189   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.575271   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.582359   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 08:55:06.594977   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 08:55:06.606187   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.617464   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
	I1227 08:55:06.628478   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.633627   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.633684   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.640779   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 08:55:06.652579   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
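The test/ln/openssl sequence above is how each CA ends up in the guest's trust store: the certificate is hashed with openssl and the hash becomes the symlink name under /etc/ssl/certs. A minimal sketch of the same idea for an arbitrary PEM (the path below is hypothetical) is:

	# Sketch: trust a CA by subject-hash symlink, as in the steps above.
	CERT=/usr/share/ca-certificates/example-ca.pem   # hypothetical path
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"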
	I1227 08:55:06.663960   24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 08:55:06.668886   24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 08:55:06.668953   24108 kubeadm.go:401] StartCluster: {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:55:06.669105   24108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 08:55:06.684838   24108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 08:55:06.696256   24108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 08:55:06.708324   24108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 08:55:06.720681   24108 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 08:55:06.720728   24108 kubeadm.go:158] found existing configuration files:
	
	I1227 08:55:06.720787   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 08:55:06.731330   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 08:55:06.731392   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 08:55:06.744324   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 08:55:06.754995   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 08:55:06.755091   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 08:55:06.767513   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 08:55:06.778490   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 08:55:06.778576   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 08:55:06.789929   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 08:55:06.800709   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 08:55:06.800794   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 08:55:06.812666   24108 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1227 08:55:07.024456   24108 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 08:55:15.975818   24108 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 08:55:15.975905   24108 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 08:55:15.976023   24108 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 08:55:15.976153   24108 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 08:55:15.976280   24108 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 08:55:15.976375   24108 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 08:55:15.977966   24108 out.go:252]   - Generating certificates and keys ...
	I1227 08:55:15.978092   24108 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 08:55:15.978154   24108 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 08:55:15.978227   24108 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 08:55:15.978279   24108 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 08:55:15.978354   24108 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 08:55:15.978437   24108 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 08:55:15.978507   24108 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 08:55:15.978652   24108 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1227 08:55:15.978708   24108 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 08:55:15.978817   24108 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1227 08:55:15.978879   24108 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 08:55:15.978934   24108 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 08:55:15.979025   24108 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 08:55:15.979124   24108 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 08:55:15.979189   24108 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 08:55:15.979284   24108 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 08:55:15.979376   24108 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 08:55:15.979463   24108 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 08:55:15.979528   24108 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 08:55:15.979667   24108 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 08:55:15.979731   24108 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 08:55:15.981818   24108 out.go:252]   - Booting up control plane ...
	I1227 08:55:15.981903   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 08:55:15.981981   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 08:55:15.982067   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 08:55:15.982163   24108 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 08:55:15.982243   24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 08:55:15.982343   24108 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 08:55:15.982416   24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 08:55:15.982468   24108 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 08:55:15.982635   24108 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 08:55:15.982810   24108 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 08:55:15.982906   24108 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001479517s
	I1227 08:55:15.983060   24108 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 08:55:15.983187   24108 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
	I1227 08:55:15.983294   24108 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 08:55:15.983366   24108 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 08:55:15.983434   24108 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508222077s
	I1227 08:55:15.983490   24108 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.795811505s
	I1227 08:55:15.983547   24108 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00280761s
	I1227 08:55:15.983634   24108 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 08:55:15.983743   24108 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 08:55:15.983806   24108 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 08:55:15.983962   24108 kubeadm.go:319] [mark-control-plane] Marking the node multinode-899276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 08:55:15.984029   24108 kubeadm.go:319] [bootstrap-token] Using token: 8gubmu.jzeht1x7riked3vp
	I1227 08:55:15.985339   24108 out.go:252]   - Configuring RBAC rules ...
	I1227 08:55:15.985468   24108 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 08:55:15.985590   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 08:55:15.985836   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 08:55:15.985963   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 08:55:15.986071   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 08:55:15.986140   24108 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 08:55:15.986233   24108 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 08:55:15.986269   24108 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 08:55:15.986315   24108 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 08:55:15.986323   24108 kubeadm.go:319] 
	I1227 08:55:15.986381   24108 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 08:55:15.986390   24108 kubeadm.go:319] 
	I1227 08:55:15.986465   24108 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 08:55:15.986474   24108 kubeadm.go:319] 
	I1227 08:55:15.986507   24108 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 08:55:15.986576   24108 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 08:55:15.986650   24108 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 08:55:15.986662   24108 kubeadm.go:319] 
	I1227 08:55:15.986752   24108 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 08:55:15.986762   24108 kubeadm.go:319] 
	I1227 08:55:15.986803   24108 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 08:55:15.986808   24108 kubeadm.go:319] 
	I1227 08:55:15.986860   24108 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 08:55:15.986924   24108 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 08:55:15.986987   24108 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 08:55:15.986995   24108 kubeadm.go:319] 
	I1227 08:55:15.987083   24108 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 08:55:15.987152   24108 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 08:55:15.987157   24108 kubeadm.go:319] 
	I1227 08:55:15.987230   24108 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
	I1227 08:55:15.987318   24108 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c \
	I1227 08:55:15.987337   24108 kubeadm.go:319] 	--control-plane 
	I1227 08:55:15.987343   24108 kubeadm.go:319] 
	I1227 08:55:15.987420   24108 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 08:55:15.987428   24108 kubeadm.go:319] 
	I1227 08:55:15.987499   24108 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
	I1227 08:55:15.987622   24108 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c 
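kubeadm's success banner above includes a join command whose bootstrap token (8gubmu.jzeht1x7riked3vp) expires after 24h. Should that token have lapsed by the time another node joins, a replacement command can be printed on the control plane with the standard kubeadm facility, for example:

	# Sketch: mint a fresh bootstrap token and print the matching worker join command.
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm token create --print-join-command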
	I1227 08:55:15.987640   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:55:15.987649   24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1227 08:55:15.989869   24108 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 08:55:15.990980   24108 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 08:55:15.997094   24108 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 08:55:15.997119   24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 08:55:16.018807   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 08:55:16.327079   24108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 08:55:16.327141   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:16.327146   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276 minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=true
	I1227 08:55:16.365159   24108 ops.go:34] apiserver oom_adj: -16
	I1227 08:55:16.465863   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:16.966866   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:17.466570   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:17.966578   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:18.466519   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:18.966943   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:19.466148   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:19.966252   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:20.466874   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:20.559551   24108 kubeadm.go:1114] duration metric: took 4.232470194s to wait for elevateKubeSystemPrivileges
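The repeated "kubectl get sa default" calls above are minikube polling for the default ServiceAccount before binding cluster-admin to kube-system, which is what the elevateKubeSystemPrivileges timing measures. An equivalent standalone wait, assuming the same binary path and kubeconfig, looks like:

	# Sketch: block until the default ServiceAccount exists in the default namespace.
	until sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done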
	I1227 08:55:20.559594   24108 kubeadm.go:403] duration metric: took 13.890642839s to StartCluster
	I1227 08:55:20.559615   24108 settings.go:142] acquiring lock: {Name:mk44fcba3019847ba7794682dc7fa5d4c6839e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:20.559700   24108 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:55:20.560349   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/kubeconfig: {Name:mk9f130990d4b2bd0dfe5788b549d55d90047007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:20.560606   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 08:55:20.560624   24108 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 08:55:20.560698   24108 addons.go:70] Setting storage-provisioner=true in profile "multinode-899276"
	I1227 08:55:20.560599   24108 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 08:55:20.560734   24108 addons.go:70] Setting default-storageclass=true in profile "multinode-899276"
	I1227 08:55:20.560754   24108 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "multinode-899276"
	I1227 08:55:20.560889   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:20.560722   24108 addons.go:239] Setting addon storage-provisioner=true in "multinode-899276"
	I1227 08:55:20.560976   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:55:20.563353   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:20.563858   24108 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 08:55:20.563881   24108 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 08:55:20.563887   24108 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 08:55:20.563895   24108 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 08:55:20.563910   24108 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 08:55:20.563922   24108 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 08:55:20.563927   24108 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 08:55:20.564267   24108 addons.go:239] Setting addon default-storageclass=true in "multinode-899276"
	I1227 08:55:20.564296   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:55:20.566001   24108 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 08:55:20.566022   24108 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 08:55:20.566660   24108 out.go:179] * Verifying Kubernetes components...
	I1227 08:55:20.566668   24108 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 08:55:20.568005   24108 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 08:55:20.568024   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:20.568027   24108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 08:55:20.568764   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.569218   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:20.569253   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.569506   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:55:20.570678   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.571119   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:20.571146   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.571271   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:55:20.721800   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 08:55:20.853268   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:55:21.022237   24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 08:55:21.022257   24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 08:55:21.456081   24108 start.go:987] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1227 08:55:21.456682   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:21.456749   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:21.457033   24108 node_ready.go:35] waiting up to 6m0s for node "multinode-899276" to be "Ready" ...
	I1227 08:55:21.828507   24108 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 08:55:21.829821   24108 addons.go:530] duration metric: took 1.269198648s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 08:55:21.962140   24108 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-899276" context rescaled to 1 replicas
	W1227 08:55:23.460520   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:25.461678   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:27.960886   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:30.459943   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:32.460468   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:34.460900   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:36.960939   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:39.460258   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	I1227 08:55:40.960160   24108 node_ready.go:49] node "multinode-899276" is "Ready"
	I1227 08:55:40.960196   24108 node_ready.go:38] duration metric: took 19.503123053s for node "multinode-899276" to be "Ready" ...
	I1227 08:55:40.960216   24108 api_server.go:52] waiting for apiserver process to appear ...
	I1227 08:55:40.960272   24108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:55:40.980487   24108 api_server.go:72] duration metric: took 20.419735752s to wait for apiserver process to appear ...
	I1227 08:55:40.980522   24108 api_server.go:88] waiting for apiserver healthz status ...
	I1227 08:55:40.980545   24108 api_server.go:299] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1227 08:55:40.985397   24108 api_server.go:325] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1227 08:55:40.986902   24108 api_server.go:141] control plane version: v1.35.0
	I1227 08:55:40.986929   24108 api_server.go:131] duration metric: took 6.398762ms to wait for apiserver health ...
	I1227 08:55:40.986938   24108 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 08:55:40.990608   24108 system_pods.go:59] 8 kube-system pods found
	I1227 08:55:40.990654   24108 system_pods.go:61] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:40.990664   24108 system_pods.go:61] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:40.990674   24108 system_pods.go:61] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:40.990682   24108 system_pods.go:61] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:40.990688   24108 system_pods.go:61] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:40.990698   24108 system_pods.go:61] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:40.990703   24108 system_pods.go:61] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:40.990715   24108 system_pods.go:61] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:40.990723   24108 system_pods.go:74] duration metric: took 3.778634ms to wait for pod list to return data ...
	I1227 08:55:40.990733   24108 default_sa.go:34] waiting for default service account to be created ...
	I1227 08:55:40.993709   24108 default_sa.go:45] found service account: "default"
	I1227 08:55:40.993729   24108 default_sa.go:55] duration metric: took 2.988456ms for default service account to be created ...
	I1227 08:55:40.993736   24108 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 08:55:40.996625   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:40.996661   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:40.996672   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:40.996683   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:40.996690   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:40.996698   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:40.996709   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:40.996716   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:40.996727   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:40.996757   24108 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 08:55:41.222991   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.223041   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:41.223072   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.223082   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.223088   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.223095   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.223101   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.223107   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.223115   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:41.595420   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.595456   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:41.595463   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.595468   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.595472   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.595476   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.595479   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.595482   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.595487   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:41.921377   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.921417   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Running
	I1227 08:55:41.921426   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.921432   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.921437   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.921443   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.921448   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.921453   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.921458   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Running
	I1227 08:55:41.921468   24108 system_pods.go:126] duration metric: took 927.725772ms to wait for k8s-apps to be running ...
	I1227 08:55:41.921482   24108 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 08:55:41.921538   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:55:41.943521   24108 system_svc.go:56] duration metric: took 22.03282ms WaitForService to wait for kubelet
	I1227 08:55:41.943547   24108 kubeadm.go:587] duration metric: took 21.382801319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 08:55:41.943563   24108 node_conditions.go:102] verifying NodePressure condition ...
	I1227 08:55:41.946923   24108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1227 08:55:41.946949   24108 node_conditions.go:123] node cpu capacity is 2
	I1227 08:55:41.946964   24108 node_conditions.go:105] duration metric: took 3.396847ms to run NodePressure ...
	I1227 08:55:41.946975   24108 start.go:242] waiting for startup goroutines ...
	I1227 08:55:41.946982   24108 start.go:247] waiting for cluster config update ...
	I1227 08:55:41.946995   24108 start.go:256] writing updated cluster config ...
	I1227 08:55:41.949394   24108 out.go:203] 
	I1227 08:55:41.951062   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:41.951143   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:41.952889   24108 out.go:179] * Starting "multinode-899276-m02" worker node in "multinode-899276" cluster
	I1227 08:55:41.954248   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:55:41.954267   24108 cache.go:65] Caching tarball of preloaded images
	I1227 08:55:41.954391   24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 08:55:41.954406   24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 08:55:41.954483   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:41.954681   24108 start.go:360] acquireMachinesLock for multinode-899276-m02: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 08:55:41.954734   24108 start.go:364] duration metric: took 30.88µs to acquireMachinesLock for "multinode-899276-m02"
	I1227 08:55:41.954766   24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1227 08:55:41.954827   24108 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1227 08:55:41.956569   24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1227 08:55:41.956662   24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
	I1227 08:55:41.956692   24108 client.go:173] LocalClient.Create starting
	I1227 08:55:41.956761   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
	I1227 08:55:41.956803   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:55:41.956824   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:55:41.956873   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
	I1227 08:55:41.956892   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:55:41.956910   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:55:41.957088   24108 main.go:144] libmachine: creating domain...
	I1227 08:55:41.957098   24108 main.go:144] libmachine: creating network...
	I1227 08:55:41.958253   24108 main.go:144] libmachine: found existing default network
	I1227 08:55:41.958505   24108 main.go:144] libmachine: <network connections='1'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:55:41.958687   24108 main.go:144] libmachine: found existing mk-multinode-899276 private network, skipping creation
	I1227 08:55:41.958885   24108 main.go:144] libmachine: <network>
	  <name>mk-multinode-899276</name>
	  <uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:7e:96:0f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	      <host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:55:41.959076   24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
	I1227 08:55:41.959099   24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
	I1227 08:55:41.959107   24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:55:41.959186   24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
	I1227 08:55:42.180540   24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa...
	I1227 08:55:42.254861   24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk...
	I1227 08:55:42.254917   24108 main.go:144] libmachine: Writing magic tar header
	I1227 08:55:42.254943   24108 main.go:144] libmachine: Writing SSH key tar header
	I1227 08:55:42.255061   24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
	I1227 08:55:42.255137   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02
	I1227 08:55:42.255165   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 (perms=drwx------)
	I1227 08:55:42.255182   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
	I1227 08:55:42.255201   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
	I1227 08:55:42.255216   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:55:42.255227   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
	I1227 08:55:42.255238   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
	I1227 08:55:42.255257   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
	I1227 08:55:42.255282   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1227 08:55:42.255298   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1227 08:55:42.255318   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1227 08:55:42.255333   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1227 08:55:42.255348   24108 main.go:144] libmachine: checking permissions on dir: /home
	I1227 08:55:42.255359   24108 main.go:144] libmachine: skipping /home - not owner
	I1227 08:55:42.255363   24108 main.go:144] libmachine: defining domain...
	I1227 08:55:42.256580   24108 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>multinode-899276-m02</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:55:42.265000   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:b3:04:b6 in network default
	I1227 08:55:42.265650   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:42.265669   24108 main.go:144] libmachine: starting domain...
	I1227 08:55:42.265674   24108 main.go:144] libmachine: ensuring networks are active...
	I1227 08:55:42.266690   24108 main.go:144] libmachine: Ensuring network default is active
	I1227 08:55:42.267245   24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
	I1227 08:55:42.267833   24108 main.go:144] libmachine: getting domain XML...
	I1227 08:55:42.269145   24108 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-899276-m02</name>
	  <uuid>08f0927e-00b1-40b5-b768-ac07d0776e28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:9b:0b:64'/>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b3:04:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:55:43.575420   24108 main.go:144] libmachine: waiting for domain to start...
	I1227 08:55:43.576915   24108 main.go:144] libmachine: domain is now running
	I1227 08:55:43.576935   24108 main.go:144] libmachine: waiting for IP...
	I1227 08:55:43.577720   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:43.578257   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:43.578273   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:43.578564   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:43.833127   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:43.833729   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:43.833744   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:43.834083   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.161636   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.162394   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.162413   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.162749   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.477602   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.478263   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.478282   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.478685   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.857427   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.858004   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.858026   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.858397   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:45.619396   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:45.619938   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:45.619953   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:45.620268   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:46.214206   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:46.214738   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:46.214760   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:46.215107   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:47.368589   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:47.369148   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:47.369169   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:47.369473   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:48.790105   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:48.790775   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:48.790792   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:48.791137   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:50.057612   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:50.058205   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:50.058230   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:50.058563   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:51.571769   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:51.572501   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:51.572522   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:51.572969   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:54.369906   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:54.370596   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:54.370610   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:54.370961   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:57.241023   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.241672   24108 main.go:144] libmachine: domain multinode-899276-m02 has current primary IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.241689   24108 main.go:144] libmachine: found domain IP: 192.168.39.160
	I1227 08:55:57.241696   24108 main.go:144] libmachine: reserving static IP address...
	I1227 08:55:57.242083   24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276-m02", mac: "52:54:00:9b:0b:64", ip: "192.168.39.160"} in network mk-multinode-899276
	I1227 08:55:57.450637   24108 main.go:144] libmachine: reserved static IP address 192.168.39.160 for domain multinode-899276-m02
	I1227 08:55:57.450661   24108 main.go:144] libmachine: waiting for SSH...
	I1227 08:55:57.450668   24108 main.go:144] libmachine: Getting to WaitForSSH function...
	I1227 08:55:57.453744   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.454265   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.454291   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.454489   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.454732   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.454744   24108 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1227 08:55:57.569604   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:55:57.570099   24108 main.go:144] libmachine: domain creation complete
	I1227 08:55:57.571770   24108 machine.go:94] provisionDockerMachine start ...
	I1227 08:55:57.574152   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.574608   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.574633   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.574862   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.575132   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.575147   24108 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 08:55:57.686687   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1227 08:55:57.686742   24108 buildroot.go:166] provisioning hostname "multinode-899276-m02"
	I1227 08:55:57.689982   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.690439   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.690482   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.690712   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.690987   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.691006   24108 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-899276-m02 && echo "multinode-899276-m02" | sudo tee /etc/hostname
	I1227 08:55:57.825642   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276-m02
	
	I1227 08:55:57.828982   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.829434   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.829471   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.829664   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.829868   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.829883   24108 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899276-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899276-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 08:55:57.955353   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:55:57.955387   24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
	I1227 08:55:57.955404   24108 buildroot.go:174] setting up certificates
	I1227 08:55:57.955412   24108 provision.go:84] configureAuth start
	I1227 08:55:57.958329   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.958721   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.958743   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961212   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961604   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.961634   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961769   24108 provision.go:143] copyHostCerts
	I1227 08:55:57.961801   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:55:57.961840   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
	I1227 08:55:57.961853   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:55:57.961943   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
	I1227 08:55:57.962064   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:55:57.962093   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
	I1227 08:55:57.962101   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:55:57.962149   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
	I1227 08:55:57.962220   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:55:57.962245   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
	I1227 08:55:57.962253   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:55:57.962290   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
	I1227 08:55:57.962357   24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276-m02 san=[127.0.0.1 192.168.39.160 localhost minikube multinode-899276-m02]
	I1227 08:55:58.062355   24108 provision.go:177] copyRemoteCerts
	I1227 08:55:58.062418   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 08:55:58.065702   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.066127   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.066154   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.066319   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:58.156852   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 08:55:58.156925   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 08:55:58.186973   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 08:55:58.187035   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1227 08:55:58.216314   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 08:55:58.216378   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 08:55:58.250146   24108 provision.go:87] duration metric: took 294.721391ms to configureAuth
	I1227 08:55:58.250177   24108 buildroot.go:189] setting minikube options for container-runtime
	I1227 08:55:58.250357   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:58.252989   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.253461   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.253487   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.253690   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.253921   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.253934   24108 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 08:55:58.373697   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 08:55:58.373723   24108 buildroot.go:70] root file system type: tmpfs
	I1227 08:55:58.373873   24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 08:55:58.376713   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.377114   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.377139   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.377329   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.377512   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.377555   24108 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.39.24"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 08:55:58.508330   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.39.24
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 08:55:58.511413   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.511851   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.511879   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.512069   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.512332   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.512351   24108 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 08:55:59.431853   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
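	(Editor's note, illustrative only.) The exchange above shows the provisioning pattern minikube uses for the worker's docker.service: render the unit, write it to docker.service.new over SSH, diff it against any installed unit, and only move it into place and restart docker when the two differ; here the diff fails because no unit exists yet on the fresh m02 machine, so the new file is installed and the service enabled. The sketch below is not minikube's code, just a minimal local Go rendition of that "write-if-changed, then reload and restart" idea; the paths and restart commands are assumptions for illustration.

	// Illustrative sketch only (not minikube's implementation): render, compare,
	// replace, and restart a systemd unit only when its content changed.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func installIfChanged(path string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unit already up to date, nothing to restart
		}
		// Either the unit is missing (the case in the log above) or it changed.
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		// Reload systemd and restart the service so the new unit takes effect.
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Println("install failed:", err)
		}
	}
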
	I1227 08:55:59.431877   24108 machine.go:97] duration metric: took 1.86008098s to provisionDockerMachine
	I1227 08:55:59.431888   24108 client.go:176] duration metric: took 17.475186189s to LocalClient.Create
	I1227 08:55:59.431902   24108 start.go:167] duration metric: took 17.47524121s to libmachine.API.Create "multinode-899276"
	I1227 08:55:59.431909   24108 start.go:293] postStartSetup for "multinode-899276-m02" (driver="kvm2")
	I1227 08:55:59.431918   24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 08:55:59.431968   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 08:55:59.434620   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.435132   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.435167   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.435355   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:59.525674   24108 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 08:55:59.530511   24108 info.go:137] Remote host: Buildroot 2025.02
	I1227 08:55:59.530547   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
	I1227 08:55:59.530632   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
	I1227 08:55:59.530706   24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
	I1227 08:55:59.530716   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
	I1227 08:55:59.530821   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 08:55:59.542821   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:55:59.573575   24108 start.go:296] duration metric: took 141.651568ms for postStartSetup
	I1227 08:55:59.576745   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.577190   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.577225   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.577486   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:59.577738   24108 start.go:128] duration metric: took 17.622900484s to createHost
	I1227 08:55:59.579881   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.580246   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.580267   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.580524   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:59.580736   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:59.580748   24108 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1227 08:55:59.695810   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825759.656998713
	
	I1227 08:55:59.695838   24108 fix.go:216] guest clock: 1766825759.656998713
	I1227 08:55:59.695847   24108 fix.go:229] Guest: 2025-12-27 08:55:59.656998713 +0000 UTC Remote: 2025-12-27 08:55:59.577753428 +0000 UTC m=+82.275426938 (delta=79.245285ms)
	I1227 08:55:59.695869   24108 fix.go:200] guest clock delta is within tolerance: 79.245285ms
	I1227 08:55:59.695877   24108 start.go:83] releasing machines lock for "multinode-899276-m02", held for 17.741133225s
	I1227 08:55:59.698823   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.699365   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.699403   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.701968   24108 out.go:179] * Found network options:
	I1227 08:55:59.703396   24108 out.go:179]   - NO_PROXY=192.168.39.24
	W1227 08:55:59.704647   24108 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 08:55:59.705042   24108 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 08:55:59.705131   24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1227 08:55:59.705131   24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 08:55:59.708339   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708387   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708760   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.708817   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.708844   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708889   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.709024   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:59.709228   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	W1227 08:55:59.793520   24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 08:55:59.793609   24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 08:55:59.816238   24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 08:55:59.816269   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:55:59.816301   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:55:59.816397   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:55:59.839936   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 08:55:59.852570   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 08:55:59.865005   24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 08:55:59.865103   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 08:55:59.877853   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:55:59.890799   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 08:55:59.903794   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:55:59.916281   24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 08:55:59.929816   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 08:55:59.942187   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 08:55:59.955245   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 08:55:59.968552   24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 08:55:59.979484   24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1227 08:55:59.979563   24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1227 08:55:59.993561   24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 08:56:00.006240   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:00.152118   24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 08:56:00.190124   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:56:00.190172   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:56:00.190230   24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 08:56:00.211952   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:56:00.237208   24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 08:56:00.259010   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:56:00.275879   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:56:00.293605   24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 08:56:00.326414   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:56:00.342364   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:56:00.365931   24108 ssh_runner.go:195] Run: which cri-dockerd
	I1227 08:56:00.370257   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 08:56:00.382716   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 08:56:00.404739   24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 08:56:00.548335   24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 08:56:00.689510   24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 08:56:00.689570   24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 08:56:00.729510   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:56:00.746884   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:00.890844   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:56:01.355108   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 08:56:01.370599   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 08:56:01.386540   24108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1227 08:56:01.404096   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:56:01.419794   24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 08:56:01.561520   24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 08:56:01.708164   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:01.863090   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 08:56:01.899043   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 08:56:01.915288   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:02.062800   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 08:56:02.174498   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:56:02.198066   24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 08:56:02.198172   24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 08:56:02.204239   24108 start.go:574] Will wait 60s for crictl version
	I1227 08:56:02.204318   24108 ssh_runner.go:195] Run: which crictl
	I1227 08:56:02.208415   24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 08:56:02.242462   24108 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1227 08:56:02.242547   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:56:02.272210   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:56:02.305864   24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1227 08:56:02.307155   24108 out.go:179]   - env NO_PROXY=192.168.39.24
	I1227 08:56:02.310958   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:56:02.311334   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:56:02.311356   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:56:02.311519   24108 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1227 08:56:02.316034   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:56:02.330706   24108 mustload.go:66] Loading cluster: multinode-899276
	I1227 08:56:02.330927   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:56:02.332363   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:56:02.332574   24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.160
	I1227 08:56:02.332593   24108 certs.go:195] generating shared ca certs ...
	I1227 08:56:02.332615   24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:56:02.332749   24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
	I1227 08:56:02.332808   24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
	I1227 08:56:02.332826   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 08:56:02.332851   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 08:56:02.332871   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 08:56:02.332887   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 08:56:02.332965   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
	W1227 08:56:02.333010   24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
	I1227 08:56:02.333027   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 08:56:02.333079   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
	I1227 08:56:02.333119   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
	I1227 08:56:02.333153   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
	I1227 08:56:02.333216   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:56:02.333264   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.333285   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.333302   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.333328   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 08:56:02.365645   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 08:56:02.395629   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 08:56:02.425519   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 08:56:02.455554   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 08:56:02.486238   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
	I1227 08:56:02.515842   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
	I1227 08:56:02.545758   24108 ssh_runner.go:195] Run: openssl version
	I1227 08:56:02.552395   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.564618   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
	I1227 08:56:02.577235   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.582685   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.582759   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.590482   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 08:56:02.601896   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
	I1227 08:56:02.613606   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.625518   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 08:56:02.637508   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.642823   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.642901   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.650764   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 08:56:02.663547   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 08:56:02.675853   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.688458   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
	I1227 08:56:02.701658   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.706958   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.707033   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.714242   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 08:56:02.726789   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
	I1227 08:56:02.740816   24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 08:56:02.745870   24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 08:56:02.745924   24108 kubeadm.go:935] updating node {m02 192.168.39.160 8443 v1.35.0 docker false true} ...
	I1227 08:56:02.746010   24108 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 08:56:02.746115   24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 08:56:02.758129   24108 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1227 08:56:02.758244   24108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1227 08:56:02.770426   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I1227 08:56:02.770451   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I1227 08:56:02.770474   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:56:02.770479   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm -> /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 08:56:02.770428   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1227 08:56:02.770532   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 08:56:02.770547   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl -> /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 08:56:02.770638   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 08:56:02.775599   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1227 08:56:02.775636   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1227 08:56:02.800423   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet -> /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 08:56:02.800448   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1227 08:56:02.800474   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1227 08:56:02.800530   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 08:56:02.847555   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1227 08:56:02.847596   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
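	(Editor's note, illustrative only.) The preceding lines show the binary transfer step: for each of kubeadm, kubectl, and kubelet, the runner stats /var/lib/minikube/binaries/v1.35.0/<name> on the new node and, because the stat fails on a fresh machine, copies the binary from the local cache over SSH. The Go sketch below mirrors that "check, then copy when missing" pattern against local directories; the cache and destination paths are placeholders, not minikube's real layout.

	// Illustrative sketch only: copy a cached binary into place when it is
	// missing, and skip the copy when it already exists.
	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	func ensureBinary(cacheDir, destDir, name string) error {
		dest := filepath.Join(destDir, name)
		if _, err := os.Stat(dest); err == nil {
			return nil // already transferred, skip the copy
		}
		src, err := os.Open(filepath.Join(cacheDir, name))
		if err != nil {
			return err
		}
		defer src.Close()
		if err := os.MkdirAll(destDir, 0o755); err != nil {
			return err
		}
		dst, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
		if err != nil {
			return err
		}
		defer dst.Close()
		_, err = io.Copy(dst, src)
		return err
	}

	func main() {
		for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
			if err := ensureBinary("cache/linux/amd64/v1.35.0", "binaries/v1.35.0", bin); err != nil {
				fmt.Println(bin, "->", err)
			}
		}
	}
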
	I1227 08:56:03.589571   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1227 08:56:03.603768   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1227 08:56:03.631212   24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 08:56:03.655890   24108 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1227 08:56:03.660915   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:56:03.680065   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:03.823402   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:56:03.862307   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:56:03.862561   24108 start.go:318] joinCluster: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:56:03.862676   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1227 08:56:03.865388   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:56:03.865858   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:56:03.865900   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:56:03.866073   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:56:04.026904   24108 start.go:344] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1227 08:56:04.027011   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9k0kod.6geqtmlyqvlg3686 --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-899276-m02"
	I1227 08:56:04.959833   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1227 08:56:05.276831   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false
	I1227 08:56:05.365119   24108 start.go:320] duration metric: took 1.502556165s to joinCluster
	I1227 08:56:05.367341   24108 out.go:203] 
	W1227 08:56:05.368707   24108 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-899276-m02" not found
	
	W1227 08:56:05.368724   24108 out.go:285] * 
	W1227 08:56:05.369029   24108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 08:56:05.370349   24108 out.go:203] 
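	(Editor's note, illustrative only.) The GUEST_START failure above is a registration race: the kubectl label command was issued immediately after kubeadm join returned, but the worker's Node object "multinode-899276-m02" had not yet been registered with the API server, so the label apply got NotFound; the "describe nodes" excerpt below shows the node did appear seconds later. One way to make such a post-join step robust, sketched here as an assumption and not as minikube's actual fix, is to poll for the Node object before labeling. The node name, label, and timeout below are taken from this report for illustration, and kubectl is assumed to be on PATH.

	// Illustrative sketch only: wait for a Node object to exist before applying
	// labels, avoiding the NotFound race shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForNode(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// "kubectl get node <name>" exits non-zero until the kubelet has
			// registered the Node object with the API server.
			if exec.Command("kubectl", "get", "node", name).Run() == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q not registered within %s", name, timeout)
	}

	func main() {
		node := "multinode-899276-m02"
		if err := waitForNode(node, 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		// Only label once the node is visible to the API server.
		out, err := exec.Command("kubectl", "label", "--overwrite", "nodes", node,
			"minikube.k8s.io/primary=false").CombinedOutput()
		fmt.Printf("%s%v\n", out, err)
	}
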
	
	
	==> Docker <==
	Dec 27 08:55:03 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:03.484295147Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 27 08:55:03 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:03.484309203Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 27 08:55:03 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:03.498172293Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 27 08:55:04 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:04.998776948Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.109632332Z" level=info msg="Loading containers: start."
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.247245769Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.377426026Z" level=info msg="Loading containers: done."
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.391637269Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.391811290Z" level=info msg="Initializing buildkit"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.413046081Z" level=info msg="Completed buildkit initialization"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419503264Z" level=info msg="Daemon has completed initialization"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419576305Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419733300Z" level=info msg="API listen on /run/docker.sock"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419775153Z" level=info msg="API listen on [::]:2376"
	Dec 27 08:55:05 multinode-899276 systemd[1]: Started Docker Application Container Engine.
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d6e78e0ce85e8fe5edb8277132aa64d3c6e7b854ca063f186efe83036788a703/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/84314fd3b6e4330cc6b60d3efa4271b1b31c8f7297dbc6f7810f7d4222821a3c/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/01c9987cccbc7847d3b2300457909a1b20a5c3ab68ebdcb2787f46b9223e82fe/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e30cff9be5d8f21e22f56e32fdf4665f38efb1df6a4b4088fd9482e8e3f11b25/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:19 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 27 08:55:21 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d6ec4f5debfedd33fc26996965caee4b0790894833f749df68708096cc935f1/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:21 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b5c9d2f69beb277a5fa8a92c4c1be6942492e1323ecd969f21893fb56053bd2/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:25 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:25Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88: Status: Downloaded newer image for kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"
	Dec 27 08:55:41 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0e0169a737f1b2eff8f1daf82ec9040343a68bccda0dbcd16c6ebd9a120493b2/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:41 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bed607b026b4fde1069a1cde835d4fb71c333fa7c430321acf31a9a7b911f0b/resolv.conf as [nameserver 192.168.122.1]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	6895d0c824741       aa5e3ebc0dfed                                                                              25 seconds ago      Running             coredns                   0                   0e0169a737f1b       coredns-7d764666f9-952ns                   kube-system
	12a2f3326d0f4       6e38f40d628db                                                                              25 seconds ago      Running             storage-provisioner       0                   5bed607b026b4       storage-provisioner                        kube-system
	a7b61d118b3f1       kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae   41 seconds ago      Running             kindnet-cni               0                   4b5c9d2f69beb       kindnet-mgnsl                              kube-system
	d50ff81fb41a6       32652ff1bbe6b                                                                              45 seconds ago      Running             kube-proxy                0                   4d6ec4f5debfe       kube-proxy-rrb2x                           kube-system
	806a4f701d170       2c9a4b058bd7e                                                                              56 seconds ago      Running             kube-controller-manager   0                   e30cff9be5d8f       kube-controller-manager-multinode-899276   kube-system
	8f2fcc85e5e1f       550794e3b12ac                                                                              56 seconds ago      Running             kube-scheduler            0                   01c9987cccbc7       kube-scheduler-multinode-899276            kube-system
	14fb1b4cc933a       5c6acd67e9cd1                                                                              56 seconds ago      Running             kube-apiserver            0                   84314fd3b6e43       kube-apiserver-multinode-899276            kube-system
	4ca9b8bb650e0       0a108f7189562                                                                              56 seconds ago      Running             etcd                      0                   d6e78e0ce85e8       etcd-multinode-899276                      kube-system
	
	
	==> coredns [6895d0c82474] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51366 - 39875 "HINFO IN 597089617242721093.8521952542865293643. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.126758929s
	
	
	==> describe nodes <==
	Name:               multinode-899276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=multinode-899276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 08:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899276
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 08:55:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 08:55:46 +0000   Sat, 27 Dec 2025 08:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 08:55:46 +0000   Sat, 27 Dec 2025 08:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 08:55:46 +0000   Sat, 27 Dec 2025 08:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 08:55:46 +0000   Sat, 27 Dec 2025 08:55:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    multinode-899276
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d370929938249538ba64fb6eca3e648
	  System UUID:                6d370929-9382-4953-8ba6-4fb6eca3e648
	  Boot ID:                    e7571780-ff7a-4d59-887f-f7dbfc0c1beb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-952ns                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     46s
	  kube-system                 etcd-multinode-899276                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         53s
	  kube-system                 kindnet-mgnsl                               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      46s
	  kube-system                 kube-apiserver-multinode-899276             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-multinode-899276    200m (10%)    0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-proxy-rrb2x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-multinode-899276             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (7%)  220Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  47s   node-controller  Node multinode-899276 event: Registered Node multinode-899276 in Controller
	
	
	Name:               multinode-899276-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899276-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 08:56:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-899276-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 08:56:05 +0000   Sat, 27 Dec 2025 08:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 08:56:05 +0000   Sat, 27 Dec 2025 08:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 08:56:05 +0000   Sat, 27 Dec 2025 08:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 08:56:05 +0000   Sat, 27 Dec 2025 08:56:05 +0000   KubeletNotReady              [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, CSINode is not yet initialized]
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    multinode-899276-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 08f0927e00b140b5b768ac07d0776e28
	  System UUID:                08f0927e-00b1-40b5-b768-ac07d0776e28
	  Boot ID:                    1d4ac048-9867-48e6-96eb-9e9bc0666768
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4pk8r       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      1s
	  kube-system                 kube-proxy-xhrn8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:              <none>
	
	
	==> dmesg <==
	[Dec27 08:54] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001306] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.170243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.117819] kauditd_printk_skb: 1 callbacks suppressed
	[Dec27 08:55] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.102827] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.160897] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.244934] kauditd_printk_skb: 18 callbacks suppressed
	[  +4.325682] kauditd_printk_skb: 165 callbacks suppressed
	[ +14.621191] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4ca9b8bb650e] <==
	{"level":"info","ts":"2025-12-27T08:55:10.784336Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2025-12-27T08:55:10.784372Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"602226ed500416f5 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T08:55:10.785974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"602226ed500416f5 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T08:55:10.786004Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2025-12-27T08:55:10.791476Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.793884Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:multinode-899276 ClientURLs:[https://192.168.39.24:2379]}","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T08:55:10.794043Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T08:55:10.793909Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T08:55:10.795404Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T08:55:10.799763Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T08:55:10.802567Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T08:55:10.802644Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T08:55:10.804819Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.805072Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.805735Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.805926Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T08:55:10.807174Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T08:55:10.815395Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2025-12-27T08:55:10.816576Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-12-27T08:56:04.877177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.572626ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-27T08:56:04.877302Z","caller":"traceutil/trace.go:172","msg":"trace[1557608848] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:454; }","duration":"205.776267ms","start":"2025-12-27T08:56:04.671511Z","end":"2025-12-27T08:56:04.877287Z","steps":["trace[1557608848] 'range keys from in-memory index tree'  (duration: 205.559438ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T08:56:04.877487Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.4767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-27T08:56:04.877538Z","caller":"traceutil/trace.go:172","msg":"trace[1875828016] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:454; }","duration":"245.507791ms","start":"2025-12-27T08:56:04.631992Z","end":"2025-12-27T08:56:04.877500Z","steps":["trace[1875828016] 'agreement among raft nodes before linearized reading'  (duration: 92.674358ms)","trace[1875828016] 'range keys from in-memory index tree'  (duration: 152.742931ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T08:56:04.878377Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.056298ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654399270533750011 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-tz6w5\" mod_revision:454 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-tz6w5\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-tz6w5\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-27T08:56:04.878902Z","caller":"traceutil/trace.go:172","msg":"trace[1051096777] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"247.420304ms","start":"2025-12-27T08:56:04.631468Z","end":"2025-12-27T08:56:04.878888Z","steps":["trace[1051096777] 'process raft request'  (duration: 93.279326ms)","trace[1051096777] 'compare'  (duration: 152.870907ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:56:06 up 1 min,  0 users,  load average: 0.95, 0.35, 0.12
	Linux multinode-899276 6.6.95 #1 SMP PREEMPT_DYNAMIC Fri Dec 26 06:43:12 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [a7b61d118b3f] <==
	I1227 08:55:25.911665       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 08:55:25.912075       1 main.go:139] hostIP = 192.168.39.24
	podIP = 192.168.39.24
	I1227 08:55:25.912269       1 main.go:148] setting mtu 1500 for CNI 
	I1227 08:55:25.912304       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 08:55:25.912324       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T08:55:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 08:55:26.215408       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 08:55:26.215439       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 08:55:26.215448       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 08:55:26.216460       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 08:55:26.606893       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 08:55:26.606942       1 metrics.go:72] Registering metrics
	I1227 08:55:26.607009       1 controller.go:711] "Syncing nftables rules"
	I1227 08:55:36.214731       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:55:36.214869       1 main.go:301] handling current node
	I1227 08:55:46.214591       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:55:46.214648       1 main.go:301] handling current node
	I1227 08:55:56.217888       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:55:56.217992       1 main.go:301] handling current node
	I1227 08:56:06.214496       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:56:06.214540       1 main.go:301] handling current node
	I1227 08:56:06.214556       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1227 08:56:06.214568       1 main.go:324] Node multinode-899276-m02 has CIDR [10.244.1.0/24] 
	I1227 08:56:06.214996       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.160 Flags: [] Table: 0 Realm: 0} 
	
	
	==> kube-apiserver [14fb1b4cc933] <==
	I1227 08:55:12.412187       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 08:55:12.412274       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 08:55:12.415410       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 08:55:12.422763       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 08:55:12.473480       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 08:55:12.477513       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 08:55:12.497235       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 08:55:12.504256       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 08:55:13.220614       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 08:55:13.225535       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 08:55:13.225752       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 08:55:13.980887       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 08:55:14.037402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 08:55:14.121453       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 08:55:14.128526       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.24]
	I1227 08:55:14.129442       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 08:55:14.135088       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 08:55:14.269225       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 08:55:15.386610       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 08:55:15.428640       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 08:55:15.441371       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 08:55:19.919728       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 08:55:20.223365       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 08:55:20.228936       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 08:55:20.270234       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [806a4f701d17] <==
	I1227 08:55:19.089146       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.106770       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.123368       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.129444       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.129530       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.151864       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.155426       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.155501       1 range_allocator.go:177] "Sending events to api server"
	I1227 08:55:19.155519       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.155544       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 08:55:19.155550       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 08:55:19.155554       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.155636       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.167077       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-899276" podCIDRs=["10.244.0.0/24"]
	I1227 08:55:19.172639       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.176175       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.176447       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.179607       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.196290       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.208898       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.208913       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 08:55:19.208917       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 08:55:44.094465       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 08:56:05.429119       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899276-m02\" does not exist"
	I1227 08:56:05.458174       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-899276-m02" podCIDRs=["10.244.1.0/24"]
	
	
	==> kube-proxy [d50ff81fb41a] <==
	I1227 08:55:21.628068       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 08:55:21.731947       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:21.731996       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
	E1227 08:55:21.739671       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 08:55:21.830226       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1227 08:55:21.830342       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1227 08:55:21.830404       1 server_linux.go:136] "Using iptables Proxier"
	I1227 08:55:21.839592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 08:55:21.840293       1 server.go:529] "Version info" version="v1.35.0"
	I1227 08:55:21.840321       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 08:55:21.842846       1 config.go:200] "Starting service config controller"
	I1227 08:55:21.842864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 08:55:21.842880       1 config.go:106] "Starting endpoint slice config controller"
	I1227 08:55:21.842884       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 08:55:21.842909       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 08:55:21.842915       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 08:55:21.846740       1 config.go:309] "Starting node config controller"
	I1227 08:55:21.846890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 08:55:21.942963       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 08:55:21.943020       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 08:55:21.943138       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 08:55:21.948504       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [8f2fcc85e5e1] <==
	E1227 08:55:12.377527       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 08:55:12.379893       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 08:55:12.380089       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 08:55:12.380428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 08:55:12.381099       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 08:55:12.381174       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 08:55:12.384043       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 08:55:12.384255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 08:55:13.242305       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 08:55:13.257422       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 08:55:13.303156       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 08:55:13.319157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 08:55:13.362023       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 08:55:13.362795       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 08:55:13.411755       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 08:55:13.420451       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 08:55:13.431365       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 08:55:13.480845       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 08:55:13.542450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 08:55:13.554908       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 08:55:13.560944       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 08:55:13.650997       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 08:55:13.693380       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 08:55:13.694477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1227 08:55:16.332120       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354785    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a93db4ef-7986-43f9-820c-2b117c90fd1a-lib-modules\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
	Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354867    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xjf\" (UniqueName: \"kubernetes.io/projected/7ca87068-e672-4641-bc6e-b04591e75a10-kube-api-access-m8xjf\") pod \"kindnet-mgnsl\" (UID: \"7ca87068-e672-4641-bc6e-b04591e75a10\") " pod="kube-system/kindnet-mgnsl"
	Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354890    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a93db4ef-7986-43f9-820c-2b117c90fd1a-xtables-lock\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
	Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354942    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7ca87068-e672-4641-bc6e-b04591e75a10-cni-cfg\") pod \"kindnet-mgnsl\" (UID: \"7ca87068-e672-4641-bc6e-b04591e75a10\") " pod="kube-system/kindnet-mgnsl"
	Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.354968    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ca87068-e672-4641-bc6e-b04591e75a10-lib-modules\") pod \"kindnet-mgnsl\" (UID: \"7ca87068-e672-4641-bc6e-b04591e75a10\") " pod="kube-system/kindnet-mgnsl"
	Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.355059    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a93db4ef-7986-43f9-820c-2b117c90fd1a-kube-proxy\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
	Dec 27 08:55:20 multinode-899276 kubelet[2549]: I1227 08:55:20.355121    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwv8\" (UniqueName: \"kubernetes.io/projected/a93db4ef-7986-43f9-820c-2b117c90fd1a-kube-api-access-wnwv8\") pod \"kube-proxy-rrb2x\" (UID: \"a93db4ef-7986-43f9-820c-2b117c90fd1a\") " pod="kube-system/kube-proxy-rrb2x"
	Dec 27 08:55:22 multinode-899276 kubelet[2549]: E1227 08:55:22.165069    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-899276" containerName="etcd"
	Dec 27 08:55:22 multinode-899276 kubelet[2549]: I1227 08:55:22.182334    2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-rrb2x" podStartSLOduration=2.182320518 podStartE2EDuration="2.182320518s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 08:55:21.591305433 +0000 UTC m=+6.370036755" watchObservedRunningTime="2025-12-27 08:55:22.182320518 +0000 UTC m=+6.961051864"
	Dec 27 08:55:23 multinode-899276 kubelet[2549]: E1227 08:55:23.868801    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-899276" containerName="kube-apiserver"
	Dec 27 08:55:24 multinode-899276 kubelet[2549]: E1227 08:55:24.280199    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-899276" containerName="kube-scheduler"
	Dec 27 08:55:26 multinode-899276 kubelet[2549]: I1227 08:55:26.685630    2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-mgnsl" podStartSLOduration=2.79150236 podStartE2EDuration="6.685618144s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="2025-12-27 08:55:21.301251881 +0000 UTC m=+6.079983198" lastFinishedPulling="2025-12-27 08:55:25.195367666 +0000 UTC m=+9.974098982" observedRunningTime="2025-12-27 08:55:26.683876008 +0000 UTC m=+11.462607343" watchObservedRunningTime="2025-12-27 08:55:26.685618144 +0000 UTC m=+11.464349467"
	Dec 27 08:55:28 multinode-899276 kubelet[2549]: E1227 08:55:28.767005    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-multinode-899276" containerName="kube-controller-manager"
	Dec 27 08:55:32 multinode-899276 kubelet[2549]: E1227 08:55:32.167933    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-899276" containerName="etcd"
	Dec 27 08:55:33 multinode-899276 kubelet[2549]: E1227 08:55:33.875439    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-899276" containerName="kube-apiserver"
	Dec 27 08:55:34 multinode-899276 kubelet[2549]: E1227 08:55:34.286744    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-899276" containerName="kube-scheduler"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.671822    2549 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789814    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2dd7f649-dfe6-4a2d-b321-673b664a5d1b-tmp\") pod \"storage-provisioner\" (UID: \"2dd7f649-dfe6-4a2d-b321-673b664a5d1b\") " pod="kube-system/storage-provisioner"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789865    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9a3c2-20bf-4e86-8443-702c47b3e04b-config-volume\") pod \"coredns-7d764666f9-952ns\" (UID: \"f0e9a3c2-20bf-4e86-8443-702c47b3e04b\") " pod="kube-system/coredns-7d764666f9-952ns"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789892    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7pql\" (UniqueName: \"kubernetes.io/projected/f0e9a3c2-20bf-4e86-8443-702c47b3e04b-kube-api-access-l7pql\") pod \"coredns-7d764666f9-952ns\" (UID: \"f0e9a3c2-20bf-4e86-8443-702c47b3e04b\") " pod="kube-system/coredns-7d764666f9-952ns"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789911    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcsm\" (UniqueName: \"kubernetes.io/projected/2dd7f649-dfe6-4a2d-b321-673b664a5d1b-kube-api-access-pxcsm\") pod \"storage-provisioner\" (UID: \"2dd7f649-dfe6-4a2d-b321-673b664a5d1b\") " pod="kube-system/storage-provisioner"
	Dec 27 08:55:41 multinode-899276 kubelet[2549]: E1227 08:55:41.773849    2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
	Dec 27 08:55:41 multinode-899276 kubelet[2549]: I1227 08:55:41.819800    2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-952ns" podStartSLOduration=21.81978365 podStartE2EDuration="21.81978365s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 08:55:41.799893618 +0000 UTC m=+26.578624941" watchObservedRunningTime="2025-12-27 08:55:41.81978365 +0000 UTC m=+26.598514973"
	Dec 27 08:55:42 multinode-899276 kubelet[2549]: E1227 08:55:42.792462    2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
	Dec 27 08:55:43 multinode-899276 kubelet[2549]: E1227 08:55:43.808397    2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
	
	
	==> storage-provisioner [12a2f3326d0f] <==
	I1227 08:55:41.766523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-899276_a520e26b-0b55-4f68-b7fe-7e70bd195afc!
	W1227 08:55:43.681843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:43.691726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:45.695561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:45.705037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:47.709090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:47.714272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:49.718390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:49.724111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:51.732382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:51.747336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:53.753592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:53.760593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:55.768238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:55.781660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:57.787056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:57.793152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:59.799208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:55:59.806534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:56:01.811591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:56:01.822788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:56:03.826215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:56:03.831532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:56:05.835877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 08:56:05.840742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-899276 -n multinode-899276
helpers_test.go:270: (dbg) Run:  kubectl --context multinode-899276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: kindnet-4pk8r kube-proxy-xhrn8
helpers_test.go:283: ======> post-mortem[TestMultiNode/serial/FreshStart2Nodes]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context multinode-899276 describe pod kindnet-4pk8r kube-proxy-xhrn8
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context multinode-899276 describe pod kindnet-4pk8r kube-proxy-xhrn8: exit status 1 (74.410128ms)

** stderr ** 
	Error from server (NotFound): pods "kindnet-4pk8r" not found
	Error from server (NotFound): pods "kube-proxy-xhrn8" not found

** /stderr **
helpers_test.go:288: kubectl --context multinode-899276 describe pod kindnet-4pk8r kube-proxy-xhrn8: exit status 1
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (90.02s)
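In the captured output above, the node describe for multinode-899276-m02 still reports KubeletNotReady ("container runtime network not ready ... cni config uninitialized") and its kube-system pods (kindnet-4pk8r, kube-proxy-xhrn8) are only 1s old, so the worker never reaches Ready before the run is cut off. A minimal manual re-check of the same condition, assuming the multinode-899276 profile from this run is still up and the same kubeconfig context is in use (illustrative commands, not part of the test harness):

	# Wait up to 90s for the worker node to report Ready.
	kubectl --context multinode-899276 wait node/multinode-899276-m02 --for=condition=Ready --timeout=90s
	# List the kube-system pods scheduled onto the worker to see whether kindnet/kube-proxy came up.
	kubectl --context multinode-899276 -n kube-system get pods -o wide --field-selector spec.nodeName=multinode-899276-m02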

TestMultiNode/serial/MultiNodeLabels (1.58s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-899276 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:239: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_12_27T08_55_16_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m02","kubernetes.io/os":"linux"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_12_27T08_56_56_0700","minikube.k8s.io/version":"v1.37.0"},]

-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_12_27T08_55_16_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m02","kubernetes.io/os":"linux"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_12_27T08_56_56_0700","minikube.k8s.io/version":"v1.37.0"},]

-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_12_27T08_55_16_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m02","kubernetes.io/os":"linux"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_12_27T08_56_56_0700","minikube.k8s.io/version":"v1.37.0"},]

-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_12_27T08_55_16_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m02","kubernetes.io/os":"linux"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_12_27T08_56_56_0700","minikube.k8s.io/version":"v1.37.0"},]

-- /stdout --
multinode_test.go:239: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	[{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_12_27T08_55_16_0700","minikube.k8s.io/version":"v1.37.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m02","kubernetes.io/os":"linux"},{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899276-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2daf445edf4872fd9586416ba5dbf507613db86","minikube.k8s.io/name":"multinode-899276","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_12_27T08_56_56_0700","minikube.k8s.io/version":"v1.37.0"},]

-- /stdout --
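The label dumps above show the condition the assertions trip over: multinode-899276 and multinode-899276-m03 carry the minikube.k8s.io/commit, name, primary, updated_at and version labels, while multinode-899276-m02 only has the default kubernetes.io/* labels. A quick per-node spot check, assuming the same profile and kubeconfig context are still available (illustrative commands, not from the test):

	# Show all labels on every node of the profile.
	kubectl --context multinode-899276 get nodes --show-labels
	# Dump just the label map of the node that is missing the minikube.k8s.io/* labels.
	kubectl --context multinode-899276 get node multinode-899276-m02 -o jsonpath='{.metadata.labels}'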
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-899276 -n multinode-899276
helpers_test.go:253: <<< TestMultiNode/serial/MultiNodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 logs -n 25
helpers_test.go:261: TestMultiNode/serial/MultiNodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                    ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ mount-start-2-834751 ssh -- findmnt --json /minikube-host                                                                                                                   │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ stop    │ -p mount-start-2-834751                                                                                                                                                     │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ start   │ -p mount-start-2-834751                                                                                                                                                     │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ mount   │ /tmp/TestMountStartserial2539336940/001:/minikube-host --profile mount-start-2-834751 --v 0 --9p-version 9p2000.L --gid 0 --ip  --msize 6543 --port 46465 --type 9p --uid 0 │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │                     │
	│ ssh     │ mount-start-2-834751 ssh -- ls /minikube-host                                                                                                                               │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ ssh     │ mount-start-2-834751 ssh -- findmnt --json /minikube-host                                                                                                                   │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ delete  │ -p mount-start-2-834751                                                                                                                                                     │ mount-start-2-834751 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ delete  │ -p mount-start-1-817954                                                                                                                                                     │ mount-start-1-817954 │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │ 27 Dec 25 08:54 UTC │
	│ start   │ -p multinode-899276 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2                                                                                │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:54 UTC │                     │
	│ kubectl │ -p multinode-899276 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml                                                                                           │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- rollout status deployment/busybox                                                                                                                    │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- get pods -o jsonpath='{.items[*].status.podIP}'                                                                                                      │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                                                     │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- nslookup kubernetes.io                                                                                              │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- nslookup kubernetes.io                                                                                              │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- nslookup kubernetes.default                                                                                         │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- nslookup kubernetes.default                                                                                         │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- nslookup kubernetes.default.svc.cluster.local                                                                       │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- nslookup kubernetes.default.svc.cluster.local                                                                       │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                                                     │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                                                 │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- sh -c ping -c 1 192.168.39.1                                                                                        │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                                                 │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ kubectl │ -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- sh -c ping -c 1 192.168.39.1                                                                                        │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:56 UTC │
	│ node    │ add -p multinode-899276 -v=5 --alsologtostderr                                                                                                                              │ multinode-899276     │ jenkins │ v1.37.0 │ 27 Dec 25 08:56 UTC │ 27 Dec 25 08:57 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 08:54:37
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 08:54:37.348894   24108 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:54:37.349196   24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:54:37.349207   24108 out.go:374] Setting ErrFile to fd 2...
	I1227 08:54:37.349214   24108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:54:37.349401   24108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:54:37.349901   24108 out.go:368] Setting JSON to false
	I1227 08:54:37.350702   24108 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2227,"bootTime":1766823450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 08:54:37.350761   24108 start.go:143] virtualization: kvm guest
	I1227 08:54:37.352914   24108 out.go:179] * [multinode-899276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 08:54:37.354122   24108 notify.go:221] Checking for updates...
	I1227 08:54:37.354140   24108 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 08:54:37.355599   24108 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:54:37.356985   24108 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:54:37.358228   24108 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.359373   24108 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 08:54:37.360648   24108 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 08:54:37.362069   24108 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:54:37.398292   24108 out.go:179] * Using the kvm2 driver based on user configuration
	I1227 08:54:37.399595   24108 start.go:309] selected driver: kvm2
	I1227 08:54:37.399614   24108 start.go:928] validating driver "kvm2" against <nil>
	I1227 08:54:37.399634   24108 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 08:54:37.400332   24108 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 08:54:37.400590   24108 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 08:54:37.400626   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:54:37.400682   24108 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1227 08:54:37.400692   24108 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 08:54:37.400744   24108 start.go:353] cluster config:
	{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:54:37.400897   24108 iso.go:125] acquiring lock: {Name:mkf3af0a60e6ccee2eeb813de50903ed5d7e8922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 08:54:37.402631   24108 out.go:179] * Starting "multinode-899276" primary control-plane node in "multinode-899276" cluster
	I1227 08:54:37.403816   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:54:37.403844   24108 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 08:54:37.403854   24108 cache.go:65] Caching tarball of preloaded images
	I1227 08:54:37.403951   24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 08:54:37.403967   24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 08:54:37.404346   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:54:37.404374   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json: {Name:mk5e07ed738ae868a23976588c175a8cb2b30a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:54:37.404563   24108 start.go:360] acquireMachinesLock for multinode-899276: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 08:54:37.404598   24108 start.go:364] duration metric: took 20.431µs to acquireMachinesLock for "multinode-899276"
	I1227 08:54:37.404622   24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 08:54:37.404675   24108 start.go:125] createHost starting for "" (driver="kvm2")
	I1227 08:54:37.407102   24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1227 08:54:37.407274   24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
	I1227 08:54:37.407306   24108 client.go:173] LocalClient.Create starting
	I1227 08:54:37.407365   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
	I1227 08:54:37.407409   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:54:37.407425   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:54:37.407478   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
	I1227 08:54:37.407496   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:54:37.407507   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:54:37.407806   24108 main.go:144] libmachine: creating domain...
	I1227 08:54:37.407817   24108 main.go:144] libmachine: creating network...
	I1227 08:54:37.409512   24108 main.go:144] libmachine: found existing default network
	I1227 08:54:37.409702   24108 main.go:144] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.410292   24108 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001caea70}
	I1227 08:54:37.410380   24108 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-multinode-899276</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.416200   24108 main.go:144] libmachine: creating private network mk-multinode-899276 192.168.39.0/24...
	I1227 08:54:37.484690   24108 main.go:144] libmachine: private network mk-multinode-899276 192.168.39.0/24 created
	I1227 08:54:37.484994   24108 main.go:144] libmachine: <network>
	  <name>mk-multinode-899276</name>
	  <uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:7e:96:0f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:54:37.485088   24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
	I1227 08:54:37.485112   24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
	I1227 08:54:37.485123   24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.485174   24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
	I1227 08:54:37.708878   24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa...
	I1227 08:54:37.789981   24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk...
	I1227 08:54:37.790024   24108 main.go:144] libmachine: Writing magic tar header
	I1227 08:54:37.790040   24108 main.go:144] libmachine: Writing SSH key tar header
	I1227 08:54:37.790127   24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 ...
	I1227 08:54:37.790183   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276
	I1227 08:54:37.790204   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276 (perms=drwx------)
	I1227 08:54:37.790215   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
	I1227 08:54:37.790225   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
	I1227 08:54:37.790238   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:54:37.790249   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
	I1227 08:54:37.790257   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
	I1227 08:54:37.790265   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
	I1227 08:54:37.790275   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1227 08:54:37.790287   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1227 08:54:37.790303   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1227 08:54:37.790313   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1227 08:54:37.790321   24108 main.go:144] libmachine: checking permissions on dir: /home
	I1227 08:54:37.790330   24108 main.go:144] libmachine: skipping /home - not owner
	I1227 08:54:37.790334   24108 main.go:144] libmachine: defining domain...
	I1227 08:54:37.792061   24108 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>multinode-899276</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:54:37.797217   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:e2:49:84 in network default
	I1227 08:54:37.797913   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:37.797931   24108 main.go:144] libmachine: starting domain...
	I1227 08:54:37.797936   24108 main.go:144] libmachine: ensuring networks are active...
	I1227 08:54:37.798746   24108 main.go:144] libmachine: Ensuring network default is active
	I1227 08:54:37.799132   24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
	I1227 08:54:37.799776   24108 main.go:144] libmachine: getting domain XML...
	I1227 08:54:37.800794   24108 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-899276</name>
	  <uuid>6d370929-9382-4953-8ba6-4fb6eca3e648</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/multinode-899276.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4c:5c:b4'/>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e2:49:84'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:54:39.079279   24108 main.go:144] libmachine: waiting for domain to start...
	I1227 08:54:39.080610   24108 main.go:144] libmachine: domain is now running
	I1227 08:54:39.080624   24108 main.go:144] libmachine: waiting for IP...
	I1227 08:54:39.081451   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.082023   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.082037   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.082336   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.082377   24108 retry.go:84] will retry after 200ms: waiting for domain to come up
	I1227 08:54:39.326020   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.326723   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.326741   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.327098   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.575768   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.576511   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.576534   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.576883   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:39.876331   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:39.877091   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:39.877107   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:39.877413   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:40.370368   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:40.371069   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:40.371086   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:40.371431   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:40.865483   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:40.866211   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:40.866236   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:40.866603   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:41.484623   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:41.485260   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:41.485279   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:41.485638   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:42.393849   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:42.394445   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:42.394463   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:42.394914   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:43.319225   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:43.320003   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:43.320020   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:43.320334   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:44.724122   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:44.724874   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:44.724891   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:44.725237   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:46.322345   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:46.323107   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:46.323130   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:46.323457   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:48.157422   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:48.158091   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:48.158110   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:48.158455   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:51.501875   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:51.502515   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276 (source=lease)
	I1227 08:54:51.502530   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:54:51.502791   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:54:51.502830   24108 retry.go:84] will retry after 4.3s: waiting for domain to come up
	I1227 08:54:55.837835   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:55.838577   24108 main.go:144] libmachine: domain multinode-899276 has current primary IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:55.838596   24108 main.go:144] libmachine: found domain IP: 192.168.39.24
	I1227 08:54:55.838605   24108 main.go:144] libmachine: reserving static IP address...
	I1227 08:54:55.839242   24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276", mac: "52:54:00:4c:5c:b4", ip: "192.168.39.24"} in network mk-multinode-899276
	I1227 08:54:56.025597   24108 main.go:144] libmachine: reserved static IP address 192.168.39.24 for domain multinode-899276
	I1227 08:54:56.025623   24108 main.go:144] libmachine: waiting for SSH...
	I1227 08:54:56.025631   24108 main.go:144] libmachine: Getting to WaitForSSH function...
	I1227 08:54:56.028518   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.029028   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.029077   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.029273   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.029482   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.029494   24108 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1227 08:54:56.143804   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:54:56.144248   24108 main.go:144] libmachine: domain creation complete
	I1227 08:54:56.146013   24108 machine.go:94] provisionDockerMachine start ...
	I1227 08:54:56.148712   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.149157   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.149206   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.149383   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.149565   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.149574   24108 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 08:54:56.263810   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1227 08:54:56.263841   24108 buildroot.go:166] provisioning hostname "multinode-899276"
	I1227 08:54:56.266910   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.267410   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.267435   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.267640   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.267847   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.267858   24108 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-899276 && echo "multinode-899276" | sudo tee /etc/hostname
	I1227 08:54:56.401325   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276
	
	I1227 08:54:56.404664   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.405235   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.405263   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.405433   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.405644   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.405659   24108 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 08:54:56.543193   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:54:56.543230   24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
	I1227 08:54:56.543264   24108 buildroot.go:174] setting up certificates
	I1227 08:54:56.543282   24108 provision.go:84] configureAuth start
	I1227 08:54:56.546171   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.546588   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.546612   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.548760   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.549114   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.549136   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.549243   24108 provision.go:143] copyHostCerts
	I1227 08:54:56.549266   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:54:56.549290   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
	I1227 08:54:56.549298   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:54:56.549370   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
	I1227 08:54:56.549490   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:54:56.549516   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
	I1227 08:54:56.549522   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:54:56.549548   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
	I1227 08:54:56.549593   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:54:56.549609   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
	I1227 08:54:56.549615   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:54:56.549634   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
	I1227 08:54:56.549680   24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276 san=[127.0.0.1 192.168.39.24 localhost minikube multinode-899276]
	I1227 08:54:56.564952   24108 provision.go:177] copyRemoteCerts
	I1227 08:54:56.565003   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 08:54:56.567240   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.567643   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.567677   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.567850   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:56.656198   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 08:54:56.656292   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 08:54:56.685216   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 08:54:56.685304   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1227 08:54:56.714733   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 08:54:56.714819   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 08:54:56.743305   24108 provision.go:87] duration metric: took 199.989326ms to configureAuth
	I1227 08:54:56.743338   24108 buildroot.go:189] setting minikube options for container-runtime
	I1227 08:54:56.743528   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:54:56.746235   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.746587   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.746606   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.746782   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.747027   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.747039   24108 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 08:54:56.861225   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 08:54:56.861255   24108 buildroot.go:70] root file system type: tmpfs
	I1227 08:54:56.861417   24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 08:54:56.864305   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.864731   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.864767   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.864925   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:56.865130   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:56.865170   24108 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 08:54:56.996399   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 08:54:56.999444   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:56.999882   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:56.999912   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.000156   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:57.000379   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:57.000396   24108 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 08:54:57.924795   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1227 08:54:57.924823   24108 machine.go:97] duration metric: took 1.778786884s to provisionDockerMachine
	I1227 08:54:57.924839   24108 client.go:176] duration metric: took 20.517522558s to LocalClient.Create
	I1227 08:54:57.924853   24108 start.go:167] duration metric: took 20.517578026s to libmachine.API.Create "multinode-899276"
	I1227 08:54:57.924862   24108 start.go:293] postStartSetup for "multinode-899276" (driver="kvm2")
	I1227 08:54:57.924874   24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 08:54:57.924962   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 08:54:57.927733   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.928188   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:57.928219   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:57.928364   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.017094   24108 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 08:54:58.021892   24108 info.go:137] Remote host: Buildroot 2025.02
	I1227 08:54:58.021927   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
	I1227 08:54:58.022001   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
	I1227 08:54:58.022108   24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
	I1227 08:54:58.022115   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
	I1227 08:54:58.022194   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 08:54:58.035018   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:54:58.064746   24108 start.go:296] duration metric: took 139.872084ms for postStartSetup
	I1227 08:54:58.067860   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.068279   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.068306   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.068579   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:54:58.068756   24108 start.go:128] duration metric: took 20.664071028s to createHost
	I1227 08:54:58.071566   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.072015   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.072040   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.072244   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:54:58.072473   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1227 08:54:58.072488   24108 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1227 08:54:58.187322   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825698.156416973
	
	I1227 08:54:58.187344   24108 fix.go:216] guest clock: 1766825698.156416973
	I1227 08:54:58.187351   24108 fix.go:229] Guest: 2025-12-27 08:54:58.156416973 +0000 UTC Remote: 2025-12-27 08:54:58.068766977 +0000 UTC m=+20.766440443 (delta=87.649996ms)
	I1227 08:54:58.187367   24108 fix.go:200] guest clock delta is within tolerance: 87.649996ms
	I1227 08:54:58.187371   24108 start.go:83] releasing machines lock for "multinode-899276", held for 20.782762567s
	I1227 08:54:58.189878   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.190311   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.190336   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.190848   24108 ssh_runner.go:195] Run: cat /version.json
	I1227 08:54:58.190934   24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 08:54:58.193909   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.193920   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194367   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.194393   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194412   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:54:58.194445   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:54:58.194571   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.194749   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:54:58.303202   24108 ssh_runner.go:195] Run: systemctl --version
	I1227 08:54:58.309380   24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 08:54:58.315530   24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 08:54:58.315591   24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 08:54:58.335551   24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 08:54:58.335587   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:54:58.335615   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:54:58.335736   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:54:58.357443   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 08:54:58.369407   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 08:54:58.384702   24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 08:54:58.384807   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 08:54:58.399640   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:54:58.412464   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 08:54:58.424691   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:54:58.437707   24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 08:54:58.450402   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 08:54:58.462916   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 08:54:58.475650   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 08:54:58.493530   24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 08:54:58.504139   24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1227 08:54:58.504192   24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1227 08:54:58.516423   24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 08:54:58.528272   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:54:58.673716   24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 08:54:58.720867   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:54:58.720909   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:54:58.720972   24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 08:54:58.744526   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:54:58.764985   24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 08:54:58.785879   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:54:58.803205   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:54:58.821885   24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 08:54:58.856773   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:54:58.873676   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:54:58.896773   24108 ssh_runner.go:195] Run: which cri-dockerd
	I1227 08:54:58.901095   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 08:54:58.912977   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 08:54:58.935679   24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 08:54:59.087073   24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 08:54:59.235233   24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 08:54:59.235368   24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 08:54:59.257291   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:54:59.273342   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:54:59.413736   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:54:59.868087   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 08:54:59.883321   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 08:54:59.898581   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:54:59.913286   24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 08:55:00.062974   24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 08:55:00.214186   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:00.363957   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 08:55:00.400471   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 08:55:00.416741   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:00.560590   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 08:55:00.668182   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:55:00.687244   24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 08:55:00.687326   24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 08:55:00.693883   24108 start.go:574] Will wait 60s for crictl version
	I1227 08:55:00.693968   24108 ssh_runner.go:195] Run: which crictl
	I1227 08:55:00.698083   24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 08:55:00.732884   24108 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1227 08:55:00.732961   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:55:00.764467   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:55:00.793639   24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1227 08:55:00.796490   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:00.796890   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:00.796916   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:00.797129   24108 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1227 08:55:00.801979   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:55:00.819694   24108 kubeadm.go:884] updating cluster {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 08:55:00.819800   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:55:00.819853   24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 08:55:00.841928   24108 docker.go:694] Got preloaded images: 
	I1227 08:55:00.841951   24108 docker.go:700] registry.k8s.io/kube-apiserver:v1.35.0 wasn't preloaded
	I1227 08:55:00.841997   24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1227 08:55:00.855548   24108 ssh_runner.go:195] Run: which lz4
	I1227 08:55:00.860486   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1227 08:55:00.860594   24108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1227 08:55:00.865387   24108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1227 08:55:00.865417   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284632523 bytes)
	I1227 08:55:01.961740   24108 docker.go:658] duration metric: took 1.101175277s to copy over tarball
	I1227 08:55:01.961831   24108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1227 08:55:03.184079   24108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.222186343s)
	I1227 08:55:03.184117   24108 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1227 08:55:03.216811   24108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1227 08:55:03.229331   24108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1227 08:55:03.250420   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:55:03.266159   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:03.414345   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:55:05.441484   24108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.027089175s)
	I1227 08:55:05.441602   24108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 08:55:05.460483   24108 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 08:55:05.460508   24108 cache_images.go:86] Images are preloaded, skipping loading
	I1227 08:55:05.460517   24108 kubeadm.go:935] updating node { 192.168.39.24 8443 v1.35.0 docker true true} ...
	I1227 08:55:05.460610   24108 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 08:55:05.460667   24108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 08:55:05.512991   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:55:05.513022   24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1227 08:55:05.513043   24108 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 08:55:05.513080   24108 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899276 NodeName:multinode-899276 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 08:55:05.513228   24108 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-899276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 08:55:05.513292   24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 08:55:05.525546   24108 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 08:55:05.525616   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 08:55:05.537237   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1227 08:55:05.557993   24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 08:55:05.579343   24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1227 08:55:05.600550   24108 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1227 08:55:05.605151   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:55:05.620984   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:05.769960   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:55:05.800659   24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.24
	I1227 08:55:05.800681   24108 certs.go:195] generating shared ca certs ...
	I1227 08:55:05.800706   24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.800879   24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
	I1227 08:55:05.800934   24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
	I1227 08:55:05.800949   24108 certs.go:257] generating profile certs ...
	I1227 08:55:05.801012   24108 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key
	I1227 08:55:05.801071   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt with IP's: []
	I1227 08:55:05.940834   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt ...
	I1227 08:55:05.940874   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt: {Name:mk02178aca7f56d432d5f5e37ab494f5434cad17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.941124   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key ...
	I1227 08:55:05.941147   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key: {Name:mk6471e99270ac274eb8d161834a8e74a99ce837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.941271   24108 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d
	I1227 08:55:05.941294   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
	I1227 08:55:05.986153   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d ...
	I1227 08:55:05.986188   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d: {Name:mk802401bb34f0577b94f18188268edd10cab228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.986405   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d ...
	I1227 08:55:05.986426   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d: {Name:mk499be31979f3e860f435493b7a49f6c8a77f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:05.986541   24108 certs.go:382] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt
	I1227 08:55:05.986669   24108 certs.go:386] copying /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key.e254352d -> /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key
	I1227 08:55:05.986770   24108 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key
	I1227 08:55:05.986801   24108 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt with IP's: []
	I1227 08:55:06.117402   24108 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt ...
	I1227 08:55:06.117436   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt: {Name:mkff498d36179d0686c029b1a0d2c2aa28970730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:06.117638   24108 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key ...
	I1227 08:55:06.117659   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key: {Name:mkae01040e0a5553a361620eb1dc3658cbd20bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:06.117774   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 08:55:06.117805   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 08:55:06.117825   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 08:55:06.117845   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 08:55:06.117861   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 08:55:06.117875   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 08:55:06.117888   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 08:55:06.117906   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 08:55:06.117969   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
	W1227 08:55:06.118021   24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
	I1227 08:55:06.118034   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 08:55:06.118087   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
	I1227 08:55:06.118141   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
	I1227 08:55:06.118179   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
	I1227 08:55:06.118236   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:55:06.118294   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.118318   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.118337   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.118857   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 08:55:06.150178   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 08:55:06.179223   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 08:55:06.208476   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 08:55:06.239094   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 08:55:06.268368   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 08:55:06.297730   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 08:55:06.326802   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 08:55:06.357205   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 08:55:06.387582   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
	I1227 08:55:06.417521   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
	I1227 08:55:06.449486   24108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 08:55:06.473842   24108 ssh_runner.go:195] Run: openssl version
	I1227 08:55:06.481673   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.494727   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
	I1227 08:55:06.506605   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.511904   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.511979   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
	I1227 08:55:06.522748   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 08:55:06.535114   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
	I1227 08:55:06.546799   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.558007   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 08:55:06.569782   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.575189   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.575271   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:55:06.582359   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 08:55:06.594977   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 08:55:06.606187   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.617464   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
	I1227 08:55:06.628478   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.633627   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.633684   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
	I1227 08:55:06.640779   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 08:55:06.652579   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
	I1227 08:55:06.663960   24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 08:55:06.668886   24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 08:55:06.668953   24108 kubeadm.go:401] StartCluster: {Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:55:06.669105   24108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 08:55:06.684838   24108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 08:55:06.696256   24108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 08:55:06.708324   24108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 08:55:06.720681   24108 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 08:55:06.720728   24108 kubeadm.go:158] found existing configuration files:
	
	I1227 08:55:06.720787   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 08:55:06.731330   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 08:55:06.731392   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 08:55:06.744324   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 08:55:06.754995   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 08:55:06.755091   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 08:55:06.767513   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 08:55:06.778490   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 08:55:06.778576   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 08:55:06.789929   24108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 08:55:06.800709   24108 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 08:55:06.800794   24108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 08:55:06.812666   24108 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1227 08:55:07.024456   24108 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 08:55:15.975818   24108 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 08:55:15.975905   24108 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 08:55:15.976023   24108 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 08:55:15.976153   24108 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 08:55:15.976280   24108 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 08:55:15.976375   24108 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 08:55:15.977966   24108 out.go:252]   - Generating certificates and keys ...
	I1227 08:55:15.978092   24108 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 08:55:15.978154   24108 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 08:55:15.978227   24108 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 08:55:15.978279   24108 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 08:55:15.978354   24108 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 08:55:15.978437   24108 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 08:55:15.978507   24108 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 08:55:15.978652   24108 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1227 08:55:15.978708   24108 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 08:55:15.978817   24108 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899276] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1227 08:55:15.978879   24108 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 08:55:15.978934   24108 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 08:55:15.979025   24108 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 08:55:15.979124   24108 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 08:55:15.979189   24108 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 08:55:15.979284   24108 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 08:55:15.979376   24108 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 08:55:15.979463   24108 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 08:55:15.979528   24108 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 08:55:15.979667   24108 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 08:55:15.979731   24108 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 08:55:15.981818   24108 out.go:252]   - Booting up control plane ...
	I1227 08:55:15.981903   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 08:55:15.981981   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 08:55:15.982067   24108 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 08:55:15.982163   24108 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 08:55:15.982243   24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 08:55:15.982343   24108 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 08:55:15.982416   24108 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 08:55:15.982468   24108 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 08:55:15.982635   24108 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 08:55:15.982810   24108 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 08:55:15.982906   24108 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001479517s
	I1227 08:55:15.983060   24108 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 08:55:15.983187   24108 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
	I1227 08:55:15.983294   24108 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 08:55:15.983366   24108 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 08:55:15.983434   24108 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508222077s
	I1227 08:55:15.983490   24108 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.795811505s
	I1227 08:55:15.983547   24108 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00280761s
	I1227 08:55:15.983634   24108 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 08:55:15.983743   24108 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 08:55:15.983806   24108 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 08:55:15.983962   24108 kubeadm.go:319] [mark-control-plane] Marking the node multinode-899276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 08:55:15.984029   24108 kubeadm.go:319] [bootstrap-token] Using token: 8gubmu.jzeht1x7riked3vp
	I1227 08:55:15.985339   24108 out.go:252]   - Configuring RBAC rules ...
	I1227 08:55:15.985468   24108 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 08:55:15.985590   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 08:55:15.985836   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 08:55:15.985963   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 08:55:15.986071   24108 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 08:55:15.986140   24108 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 08:55:15.986233   24108 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 08:55:15.986269   24108 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 08:55:15.986315   24108 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 08:55:15.986323   24108 kubeadm.go:319] 
	I1227 08:55:15.986381   24108 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 08:55:15.986390   24108 kubeadm.go:319] 
	I1227 08:55:15.986465   24108 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 08:55:15.986474   24108 kubeadm.go:319] 
	I1227 08:55:15.986507   24108 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 08:55:15.986576   24108 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 08:55:15.986650   24108 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 08:55:15.986662   24108 kubeadm.go:319] 
	I1227 08:55:15.986752   24108 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 08:55:15.986762   24108 kubeadm.go:319] 
	I1227 08:55:15.986803   24108 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 08:55:15.986808   24108 kubeadm.go:319] 
	I1227 08:55:15.986860   24108 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 08:55:15.986924   24108 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 08:55:15.986987   24108 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 08:55:15.986995   24108 kubeadm.go:319] 
	I1227 08:55:15.987083   24108 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 08:55:15.987152   24108 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 08:55:15.987157   24108 kubeadm.go:319] 
	I1227 08:55:15.987230   24108 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
	I1227 08:55:15.987318   24108 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c \
	I1227 08:55:15.987337   24108 kubeadm.go:319] 	--control-plane 
	I1227 08:55:15.987343   24108 kubeadm.go:319] 
	I1227 08:55:15.987420   24108 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 08:55:15.987428   24108 kubeadm.go:319] 
	I1227 08:55:15.987499   24108 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8gubmu.jzeht1x7riked3vp \
	I1227 08:55:15.987622   24108 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c 
	I1227 08:55:15.987640   24108 cni.go:84] Creating CNI manager for ""
	I1227 08:55:15.987649   24108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1227 08:55:15.989869   24108 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 08:55:15.990980   24108 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 08:55:15.997094   24108 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 08:55:15.997119   24108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 08:55:16.018807   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 08:55:16.327079   24108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 08:55:16.327141   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:16.327146   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276 minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=true
	I1227 08:55:16.365159   24108 ops.go:34] apiserver oom_adj: -16
	I1227 08:55:16.465863   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:16.966866   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:17.466570   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:17.966578   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:18.466519   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:18.966943   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:19.466148   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:19.966252   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:20.466874   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 08:55:20.559551   24108 kubeadm.go:1114] duration metric: took 4.232470194s to wait for elevateKubeSystemPrivileges
	I1227 08:55:20.559594   24108 kubeadm.go:403] duration metric: took 13.890642839s to StartCluster
	I1227 08:55:20.559615   24108 settings.go:142] acquiring lock: {Name:mk44fcba3019847ba7794682dc7fa5d4c6839e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:20.559700   24108 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:55:20.560349   24108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-5516/kubeconfig: {Name:mk9f130990d4b2bd0dfe5788b549d55d90047007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:55:20.560606   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 08:55:20.560624   24108 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 08:55:20.560698   24108 addons.go:70] Setting storage-provisioner=true in profile "multinode-899276"
	I1227 08:55:20.560599   24108 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 08:55:20.560734   24108 addons.go:70] Setting default-storageclass=true in profile "multinode-899276"
	I1227 08:55:20.560754   24108 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "multinode-899276"
	I1227 08:55:20.560889   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:20.560722   24108 addons.go:239] Setting addon storage-provisioner=true in "multinode-899276"
	I1227 08:55:20.560976   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:55:20.563353   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:20.563858   24108 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 08:55:20.563881   24108 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 08:55:20.563887   24108 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 08:55:20.563895   24108 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 08:55:20.563910   24108 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 08:55:20.563922   24108 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 08:55:20.563927   24108 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 08:55:20.564267   24108 addons.go:239] Setting addon default-storageclass=true in "multinode-899276"
	I1227 08:55:20.564296   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:55:20.566001   24108 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 08:55:20.566022   24108 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 08:55:20.566660   24108 out.go:179] * Verifying Kubernetes components...
	I1227 08:55:20.566668   24108 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 08:55:20.568005   24108 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 08:55:20.568024   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:55:20.568027   24108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 08:55:20.568764   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.569218   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:20.569253   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.569506   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:55:20.570678   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.571119   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:55:20.571146   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:55:20.571271   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:55:20.721800   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 08:55:20.853268   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:55:21.022237   24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 08:55:21.022257   24108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 08:55:21.456081   24108 start.go:987] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
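The sed pipeline a few lines above rewrites the in-cluster CoreDNS Corefile: it adds a `log` directive before the `errors` plugin and, just ahead of the `forward . /etc/resolv.conf` plugin, a `hosts` block so pods can resolve the hypervisor host. Reconstructed purely from that expression (not copied from the cluster), the inserted block looks like:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }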
	I1227 08:55:21.456682   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:21.456749   24108 kapi.go:59] client config for multinode-899276: &rest.Config{Host:"https://192.168.39.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/client.key", CAFile:"/home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 08:55:21.457033   24108 node_ready.go:35] waiting up to 6m0s for node "multinode-899276" to be "Ready" ...
	I1227 08:55:21.828507   24108 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 08:55:21.829821   24108 addons.go:530] duration metric: took 1.269198648s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 08:55:21.962140   24108 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-899276" context rescaled to 1 replicas
	W1227 08:55:23.460520   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:25.461678   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:27.960886   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:30.459943   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:32.460468   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:34.460900   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:36.960939   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	W1227 08:55:39.460258   24108 node_ready.go:57] node "multinode-899276" has "Ready":"False" status (will retry)
	I1227 08:55:40.960160   24108 node_ready.go:49] node "multinode-899276" is "Ready"
	I1227 08:55:40.960196   24108 node_ready.go:38] duration metric: took 19.503123053s for node "multinode-899276" to be "Ready" ...
	I1227 08:55:40.960216   24108 api_server.go:52] waiting for apiserver process to appear ...
	I1227 08:55:40.960272   24108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:55:40.980487   24108 api_server.go:72] duration metric: took 20.419735752s to wait for apiserver process to appear ...
	I1227 08:55:40.980522   24108 api_server.go:88] waiting for apiserver healthz status ...
	I1227 08:55:40.980545   24108 api_server.go:299] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1227 08:55:40.985397   24108 api_server.go:325] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1227 08:55:40.986902   24108 api_server.go:141] control plane version: v1.35.0
	I1227 08:55:40.986929   24108 api_server.go:131] duration metric: took 6.398762ms to wait for apiserver health ...
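For reference, the healthz probe logged just above can be reproduced outside the test with a short Go program. This is a minimal sketch, assuming the profile's client certificate, key, and CA paths printed in the client-config lines earlier in this log are still valid; it only mirrors the check, it is not minikube's own code path.

    // healthz_probe.go - minimal sketch: query the apiserver /healthz endpoint
    // with the profile's client certificate, mirroring the check logged above.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	base := "/home/jenkins/minikube-integration/22344-5516/.minikube"
    	// Client certificate/key for the multinode-899276 profile (paths taken from the log).
    	cert, err := tls.LoadX509KeyPair(
    		base+"/profiles/multinode-899276/client.crt",
    		base+"/profiles/multinode-899276/client.key",
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caPEM, err := os.ReadFile(base + "/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{
    				Certificates: []tls.Certificate{cert},
    				RootCAs:      pool,
    			},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.24:8443/healthz")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with the body "ok", matching the log above.
    	fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }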
	I1227 08:55:40.986938   24108 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 08:55:40.990608   24108 system_pods.go:59] 8 kube-system pods found
	I1227 08:55:40.990654   24108 system_pods.go:61] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:40.990664   24108 system_pods.go:61] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:40.990674   24108 system_pods.go:61] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:40.990682   24108 system_pods.go:61] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:40.990688   24108 system_pods.go:61] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:40.990698   24108 system_pods.go:61] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:40.990703   24108 system_pods.go:61] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:40.990715   24108 system_pods.go:61] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:40.990723   24108 system_pods.go:74] duration metric: took 3.778634ms to wait for pod list to return data ...
	I1227 08:55:40.990733   24108 default_sa.go:34] waiting for default service account to be created ...
	I1227 08:55:40.993709   24108 default_sa.go:45] found service account: "default"
	I1227 08:55:40.993729   24108 default_sa.go:55] duration metric: took 2.988456ms for default service account to be created ...
	I1227 08:55:40.993736   24108 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 08:55:40.996625   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:40.996661   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:40.996672   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:40.996683   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:40.996690   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:40.996698   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:40.996709   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:40.996716   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:40.996727   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:40.996757   24108 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 08:55:41.222991   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.223041   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:41.223072   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.223082   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.223088   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.223095   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.223101   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.223107   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.223115   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:41.595420   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.595456   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 08:55:41.595463   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.595468   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.595472   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.595476   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.595479   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.595482   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.595487   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 08:55:41.921377   24108 system_pods.go:86] 8 kube-system pods found
	I1227 08:55:41.921417   24108 system_pods.go:89] "coredns-7d764666f9-952ns" [f0e9a3c2-20bf-4e86-8443-702c47b3e04b] Running
	I1227 08:55:41.921426   24108 system_pods.go:89] "etcd-multinode-899276" [b607aea9-c0e5-408f-91b4-62f71ad01b14] Running
	I1227 08:55:41.921432   24108 system_pods.go:89] "kindnet-mgnsl" [7ca87068-e672-4641-bc6e-b04591e75a10] Running
	I1227 08:55:41.921437   24108 system_pods.go:89] "kube-apiserver-multinode-899276" [df7fedbf-008f-4883-b45c-5b1409fc020b] Running
	I1227 08:55:41.921443   24108 system_pods.go:89] "kube-controller-manager-multinode-899276" [5607fd5f-e6cc-47f7-9422-1ac9a0b235ef] Running
	I1227 08:55:41.921448   24108 system_pods.go:89] "kube-proxy-rrb2x" [a93db4ef-7986-43f9-820c-2b117c90fd1a] Running
	I1227 08:55:41.921453   24108 system_pods.go:89] "kube-scheduler-multinode-899276" [180063ad-aabf-4559-8e21-51fb48798d2b] Running
	I1227 08:55:41.921458   24108 system_pods.go:89] "storage-provisioner" [2dd7f649-dfe6-4a2d-b321-673b664a5d1b] Running
	I1227 08:55:41.921468   24108 system_pods.go:126] duration metric: took 927.725772ms to wait for k8s-apps to be running ...
	I1227 08:55:41.921482   24108 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 08:55:41.921538   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:55:41.943521   24108 system_svc.go:56] duration metric: took 22.03282ms WaitForService to wait for kubelet
	I1227 08:55:41.943547   24108 kubeadm.go:587] duration metric: took 21.382801319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 08:55:41.943563   24108 node_conditions.go:102] verifying NodePressure condition ...
	I1227 08:55:41.946923   24108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1227 08:55:41.946949   24108 node_conditions.go:123] node cpu capacity is 2
	I1227 08:55:41.946964   24108 node_conditions.go:105] duration metric: took 3.396847ms to run NodePressure ...
	I1227 08:55:41.946975   24108 start.go:242] waiting for startup goroutines ...
	I1227 08:55:41.946982   24108 start.go:247] waiting for cluster config update ...
	I1227 08:55:41.946995   24108 start.go:256] writing updated cluster config ...
	I1227 08:55:41.949394   24108 out.go:203] 
	I1227 08:55:41.951062   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:41.951143   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:41.952889   24108 out.go:179] * Starting "multinode-899276-m02" worker node in "multinode-899276" cluster
	I1227 08:55:41.954248   24108 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 08:55:41.954267   24108 cache.go:65] Caching tarball of preloaded images
	I1227 08:55:41.954391   24108 preload.go:251] Found /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 08:55:41.954406   24108 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 08:55:41.954483   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:41.954681   24108 start.go:360] acquireMachinesLock for multinode-899276-m02: {Name:mk0331bc0b7ece2a0c7cd934e8dcec97bcb184a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 08:55:41.954734   24108 start.go:364] duration metric: took 30.88µs to acquireMachinesLock for "multinode-899276-m02"
	I1227 08:55:41.954766   24108 start.go:93] Provisioning new machine with config: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1227 08:55:41.954827   24108 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1227 08:55:41.956569   24108 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1227 08:55:41.956662   24108 start.go:159] libmachine.API.Create for "multinode-899276" (driver="kvm2")
	I1227 08:55:41.956692   24108 client.go:173] LocalClient.Create starting
	I1227 08:55:41.956761   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem
	I1227 08:55:41.956803   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:55:41.956824   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:55:41.956873   24108 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem
	I1227 08:55:41.956892   24108 main.go:144] libmachine: Decoding PEM data...
	I1227 08:55:41.956910   24108 main.go:144] libmachine: Parsing certificate...
	I1227 08:55:41.957088   24108 main.go:144] libmachine: creating domain...
	I1227 08:55:41.957098   24108 main.go:144] libmachine: creating network...
	I1227 08:55:41.958253   24108 main.go:144] libmachine: found existing default network
	I1227 08:55:41.958505   24108 main.go:144] libmachine: <network connections='1'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1227 08:55:41.958687   24108 main.go:144] libmachine: found existing mk-multinode-899276 private network, skipping creation
	I1227 08:55:41.958885   24108 main.go:144] libmachine: <network>
	  <name>mk-multinode-899276</name>
	  <uuid>2519ea81-406e-4441-ae74-8e45c3230355</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:7e:96:0f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	      <host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>
	    </dhcp>
	  </ip>
	</network>
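The network definition above carries the static DHCP reservation for the primary node (multinode-899276 at 192.168.39.24). A minimal sketch of reading that reservation with Go's standard encoding/xml package; the struct layout below is an illustrative assumption that models only the fields visible in the XML printed above.

    // network_hosts.go - minimal sketch: extract static DHCP host entries from a
    // libvirt network XML document such as the one printed above.
    package main

    import (
    	"encoding/xml"
    	"fmt"
    	"log"
    )

    // Only the fields visible in the XML above are modelled here.
    type network struct {
    	Name string `xml:"name"`
    	IP   struct {
    		Address string `xml:"address,attr"`
    		DHCP    struct {
    			Hosts []struct {
    				MAC  string `xml:"mac,attr"`
    				Name string `xml:"name,attr"`
    				IP   string `xml:"ip,attr"`
    			} `xml:"host"`
    		} `xml:"dhcp"`
    	} `xml:"ip"`
    }

    func main() {
    	doc := `<network>
      <name>mk-multinode-899276</name>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
          <host mac='52:54:00:4c:5c:b4' name='multinode-899276' ip='192.168.39.24'/>
        </dhcp>
      </ip>
    </network>`

    	var n network
    	if err := xml.Unmarshal([]byte(doc), &n); err != nil {
    		log.Fatal(err)
    	}
    	for _, h := range n.IP.DHCP.Hosts {
    		fmt.Printf("%s: %s reserved for %s\n", n.Name, h.IP, h.Name)
    	}
    }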
	
	I1227 08:55:41.959076   24108 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
	I1227 08:55:41.959099   24108 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
	I1227 08:55:41.959107   24108 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:55:41.959186   24108 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22344-5516/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso...
	I1227 08:55:42.180540   24108 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa...
	I1227 08:55:42.254861   24108 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk...
	I1227 08:55:42.254917   24108 main.go:144] libmachine: Writing magic tar header
	I1227 08:55:42.254943   24108 main.go:144] libmachine: Writing SSH key tar header
	I1227 08:55:42.255061   24108 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 ...
	I1227 08:55:42.255137   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02
	I1227 08:55:42.255165   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02 (perms=drwx------)
	I1227 08:55:42.255182   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube/machines
	I1227 08:55:42.255201   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube/machines (perms=drwxr-xr-x)
	I1227 08:55:42.255216   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:55:42.255227   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516/.minikube (perms=drwxr-xr-x)
	I1227 08:55:42.255238   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22344-5516
	I1227 08:55:42.255257   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22344-5516 (perms=drwxrwxr-x)
	I1227 08:55:42.255282   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1227 08:55:42.255298   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1227 08:55:42.255318   24108 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1227 08:55:42.255333   24108 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1227 08:55:42.255348   24108 main.go:144] libmachine: checking permissions on dir: /home
	I1227 08:55:42.255359   24108 main.go:144] libmachine: skipping /home - not owner
	I1227 08:55:42.255363   24108 main.go:144] libmachine: defining domain...
	I1227 08:55:42.256580   24108 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>multinode-899276-m02</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:55:42.265000   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:b3:04:b6 in network default
	I1227 08:55:42.265650   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:42.265669   24108 main.go:144] libmachine: starting domain...
	I1227 08:55:42.265674   24108 main.go:144] libmachine: ensuring networks are active...
	I1227 08:55:42.266690   24108 main.go:144] libmachine: Ensuring network default is active
	I1227 08:55:42.267245   24108 main.go:144] libmachine: Ensuring network mk-multinode-899276 is active
	I1227 08:55:42.267833   24108 main.go:144] libmachine: getting domain XML...
	I1227 08:55:42.269145   24108 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-899276-m02</name>
	  <uuid>08f0927e-00b1-40b5-b768-ac07d0776e28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/multinode-899276-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:9b:0b:64'/>
	      <source network='mk-multinode-899276'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b3:04:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1227 08:55:43.575420   24108 main.go:144] libmachine: waiting for domain to start...
	I1227 08:55:43.576915   24108 main.go:144] libmachine: domain is now running
	I1227 08:55:43.576935   24108 main.go:144] libmachine: waiting for IP...
	I1227 08:55:43.577720   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:43.578257   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:43.578273   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:43.578564   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:43.833127   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:43.833729   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:43.833744   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:43.834083   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.161636   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.162394   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.162413   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.162749   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.477602   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.478263   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.478282   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.478685   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:44.857427   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:44.858004   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:44.858026   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:44.858397   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:45.619396   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:45.619938   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:45.619953   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:45.620268   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:46.214206   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:46.214738   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:46.214760   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:46.215107   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:47.368589   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:47.369148   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:47.369169   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:47.369473   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:48.790105   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:48.790775   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:48.790792   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:48.791137   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:50.057612   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:50.058205   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:50.058230   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:50.058563   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:51.571769   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:51.572501   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:51.572522   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:51.572969   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:54.369906   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:54.370596   24108 main.go:144] libmachine: no network interface addresses found for domain multinode-899276-m02 (source=lease)
	I1227 08:55:54.370610   24108 main.go:144] libmachine: trying to list again with source=arp
	I1227 08:55:54.370961   24108 main.go:144] libmachine: unable to find current IP address of domain multinode-899276-m02 in network mk-multinode-899276 (interfaces detected: [])
	I1227 08:55:57.241023   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.241672   24108 main.go:144] libmachine: domain multinode-899276-m02 has current primary IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.241689   24108 main.go:144] libmachine: found domain IP: 192.168.39.160
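The block above is a plain poll loop: libmachine lists the network's DHCP leases (falling back to ARP) and retries with a growing interval until the new domain's MAC address shows up with an address. A schematic version of that pattern follows; lookupIP is a hypothetical stand-in for the real lease/ARP lookup, not libmachine's function.

    // wait_for_ip.go - schematic sketch of the poll-with-backoff pattern visible
    // in the log above; lookupIP is a hypothetical stand-in for the real
    // DHCP-lease/ARP query that libmachine performs.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var attempts int

    // lookupIP pretends to query DHCP leases for a MAC address; in this sketch it
    // succeeds only after a few calls, to mimic the retries seen in the log.
    func lookupIP(mac string) (string, error) {
    	attempts++
    	if attempts < 4 {
    		return "", errors.New("no lease yet for " + mac)
    	}
    	return "192.168.39.160", nil
    }

    func main() {
    	const mac = "52:54:00:9b:0b:64"
    	delay := 250 * time.Millisecond
    	deadline := time.Now().Add(2 * time.Minute)

    	for time.Now().Before(deadline) {
    		ip, err := lookupIP(mac)
    		if err == nil {
    			fmt.Println("found domain IP:", ip)
    			return
    		}
    		fmt.Println("still waiting:", err)
    		time.Sleep(delay)
    		delay += delay / 2 // grow the interval between retries
    	}
    	fmt.Println("timed out waiting for an IP")
    }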
	I1227 08:55:57.241696   24108 main.go:144] libmachine: reserving static IP address...
	I1227 08:55:57.242083   24108 main.go:144] libmachine: unable to find host DHCP lease matching {name: "multinode-899276-m02", mac: "52:54:00:9b:0b:64", ip: "192.168.39.160"} in network mk-multinode-899276
	I1227 08:55:57.450637   24108 main.go:144] libmachine: reserved static IP address 192.168.39.160 for domain multinode-899276-m02
	I1227 08:55:57.450661   24108 main.go:144] libmachine: waiting for SSH...
	I1227 08:55:57.450668   24108 main.go:144] libmachine: Getting to WaitForSSH function...
	I1227 08:55:57.453744   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.454265   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.454291   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.454489   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.454732   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.454744   24108 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1227 08:55:57.569604   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:55:57.570099   24108 main.go:144] libmachine: domain creation complete
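The `exit 0` probe above is how libmachine decides the guest's sshd is reachable. A minimal sketch of the same check using golang.org/x/crypto/ssh, with the key path, user, and address taken from the log; this is an illustration of the probe, not minikube's own implementation.

    // ssh_probe.go - minimal sketch: confirm sshd on the new worker answers by
    // running "exit 0", mirroring the WaitForSSH step logged above.
    // Requires golang.org/x/crypto/ssh.
    package main

    import (
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := "/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa"
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}

    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.160:22", cfg)
    	if err != nil {
    		log.Fatal("sshd not ready yet: ", err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// The same no-op command the log shows; a nil error means SSH is usable.
    	if err := session.Run("exit 0"); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("SSH is up")
    }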
	I1227 08:55:57.571770   24108 machine.go:94] provisionDockerMachine start ...
	I1227 08:55:57.574152   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.574608   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.574633   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.574862   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.575132   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.575147   24108 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 08:55:57.686687   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1227 08:55:57.686742   24108 buildroot.go:166] provisioning hostname "multinode-899276-m02"
	I1227 08:55:57.689982   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.690439   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.690482   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.690712   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.690987   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.691006   24108 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-899276-m02 && echo "multinode-899276-m02" | sudo tee /etc/hostname
	I1227 08:55:57.825642   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-899276-m02
	
	I1227 08:55:57.828982   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.829434   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.829471   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.829664   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:57.829868   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:57.829883   24108 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899276-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899276-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899276-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 08:55:57.955353   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 08:55:57.955387   24108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22344-5516/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-5516/.minikube}
	I1227 08:55:57.955404   24108 buildroot.go:174] setting up certificates
	I1227 08:55:57.955412   24108 provision.go:84] configureAuth start
	I1227 08:55:57.958329   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.958721   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.958743   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961212   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961604   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:57.961634   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:57.961769   24108 provision.go:143] copyHostCerts
	I1227 08:55:57.961801   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:55:57.961840   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem, removing ...
	I1227 08:55:57.961853   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem
	I1227 08:55:57.961943   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/ca.pem (1078 bytes)
	I1227 08:55:57.962064   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:55:57.962093   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem, removing ...
	I1227 08:55:57.962101   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem
	I1227 08:55:57.962149   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/cert.pem (1123 bytes)
	I1227 08:55:57.962220   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:55:57.962245   24108 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem, removing ...
	I1227 08:55:57.962253   24108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem
	I1227 08:55:57.962290   24108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-5516/.minikube/key.pem (1679 bytes)
	I1227 08:55:57.962357   24108 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem org=jenkins.multinode-899276-m02 san=[127.0.0.1 192.168.39.160 localhost minikube multinode-899276-m02]
	I1227 08:55:58.062355   24108 provision.go:177] copyRemoteCerts
	I1227 08:55:58.062418   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 08:55:58.065702   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.066127   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.066154   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.066319   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:58.156852   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 08:55:58.156925   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 08:55:58.186973   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 08:55:58.187035   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1227 08:55:58.216314   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 08:55:58.216378   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 08:55:58.250146   24108 provision.go:87] duration metric: took 294.721391ms to configureAuth
	I1227 08:55:58.250177   24108 buildroot.go:189] setting minikube options for container-runtime
	I1227 08:55:58.250357   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:55:58.252989   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.253461   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.253487   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.253690   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.253921   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.253934   24108 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 08:55:58.373697   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 08:55:58.373723   24108 buildroot.go:70] root file system type: tmpfs
	I1227 08:55:58.373873   24108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 08:55:58.376713   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.377114   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.377139   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.377329   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.377512   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.377555   24108 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.39.24"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 08:55:58.508330   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.39.24
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 08:55:58.511413   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.511851   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:58.511879   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:58.512069   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:58.512332   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:58.512351   24108 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 08:55:59.431853   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1227 08:55:59.431877   24108 machine.go:97] duration metric: took 1.86008098s to provisionDockerMachine
	I1227 08:55:59.431888   24108 client.go:176] duration metric: took 17.475186189s to LocalClient.Create
	I1227 08:55:59.431902   24108 start.go:167] duration metric: took 17.47524121s to libmachine.API.Create "multinode-899276"
	I1227 08:55:59.431909   24108 start.go:293] postStartSetup for "multinode-899276-m02" (driver="kvm2")
	I1227 08:55:59.431918   24108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 08:55:59.431968   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 08:55:59.434620   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.435132   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.435167   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.435355   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:59.525674   24108 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 08:55:59.530511   24108 info.go:137] Remote host: Buildroot 2025.02
	I1227 08:55:59.530547   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/addons for local assets ...
	I1227 08:55:59.530632   24108 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-5516/.minikube/files for local assets ...
	I1227 08:55:59.530706   24108 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> 94612.pem in /etc/ssl/certs
	I1227 08:55:59.530716   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /etc/ssl/certs/94612.pem
	I1227 08:55:59.530821   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 08:55:59.542821   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:55:59.573575   24108 start.go:296] duration metric: took 141.651568ms for postStartSetup
	I1227 08:55:59.576745   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.577190   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.577225   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.577486   24108 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276/config.json ...
	I1227 08:55:59.577738   24108 start.go:128] duration metric: took 17.622900484s to createHost
	I1227 08:55:59.579881   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.580246   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.580267   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.580524   24108 main.go:144] libmachine: Using SSH client type: native
	I1227 08:55:59.580736   24108 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1227 08:55:59.580748   24108 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1227 08:55:59.695810   24108 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766825759.656998713
	
	I1227 08:55:59.695838   24108 fix.go:216] guest clock: 1766825759.656998713
	I1227 08:55:59.695847   24108 fix.go:229] Guest: 2025-12-27 08:55:59.656998713 +0000 UTC Remote: 2025-12-27 08:55:59.577753428 +0000 UTC m=+82.275426938 (delta=79.245285ms)
	I1227 08:55:59.695869   24108 fix.go:200] guest clock delta is within tolerance: 79.245285ms
	I1227 08:55:59.695877   24108 start.go:83] releasing machines lock for "multinode-899276-m02", held for 17.741133225s
	I1227 08:55:59.698823   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.699365   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.699403   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.701968   24108 out.go:179] * Found network options:
	I1227 08:55:59.703396   24108 out.go:179]   - NO_PROXY=192.168.39.24
	W1227 08:55:59.704647   24108 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 08:55:59.705042   24108 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 08:55:59.705131   24108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1227 08:55:59.705131   24108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 08:55:59.708339   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708387   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708760   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.708817   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:55:59.708844   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.708889   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:55:59.709024   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:55:59.709228   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	W1227 08:55:59.793520   24108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 08:55:59.793609   24108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 08:55:59.816238   24108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 08:55:59.816269   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:55:59.816301   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:55:59.816397   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:55:59.839936   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 08:55:59.852570   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 08:55:59.865005   24108 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 08:55:59.865103   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 08:55:59.877853   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:55:59.890799   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 08:55:59.903794   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 08:55:59.916281   24108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 08:55:59.929816   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 08:55:59.942187   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 08:55:59.955245   24108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 08:55:59.968552   24108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 08:55:59.979484   24108 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1227 08:55:59.979563   24108 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1227 08:55:59.993561   24108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 08:56:00.006240   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:00.152118   24108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 08:56:00.190124   24108 start.go:496] detecting cgroup driver to use...
	I1227 08:56:00.190172   24108 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 08:56:00.190230   24108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 08:56:00.211952   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:56:00.237208   24108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 08:56:00.259010   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 08:56:00.275879   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:56:00.293605   24108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 08:56:00.326414   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 08:56:00.342364   24108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 08:56:00.365931   24108 ssh_runner.go:195] Run: which cri-dockerd
	I1227 08:56:00.370257   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 08:56:00.382716   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 08:56:00.404739   24108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 08:56:00.548335   24108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 08:56:00.689510   24108 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 08:56:00.689570   24108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 08:56:00.729510   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 08:56:00.746884   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:00.890844   24108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 08:56:01.355108   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 08:56:01.370599   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 08:56:01.386540   24108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1227 08:56:01.404096   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:56:01.419794   24108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 08:56:01.561520   24108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 08:56:01.708164   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:01.863090   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 08:56:01.899043   24108 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 08:56:01.915288   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:02.062800   24108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 08:56:02.174498   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 08:56:02.198066   24108 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 08:56:02.198172   24108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 08:56:02.204239   24108 start.go:574] Will wait 60s for crictl version
	I1227 08:56:02.204318   24108 ssh_runner.go:195] Run: which crictl
	I1227 08:56:02.208415   24108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 08:56:02.242462   24108 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1227 08:56:02.242547   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:56:02.272210   24108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 08:56:02.305864   24108 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1227 08:56:02.307155   24108 out.go:179]   - env NO_PROXY=192.168.39.24
	I1227 08:56:02.310958   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:56:02.311334   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:56:02.311356   24108 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:56:02.311519   24108 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1227 08:56:02.316034   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:56:02.330706   24108 mustload.go:66] Loading cluster: multinode-899276
	I1227 08:56:02.330927   24108 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:56:02.332363   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:56:02.332574   24108 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/multinode-899276 for IP: 192.168.39.160
	I1227 08:56:02.332593   24108 certs.go:195] generating shared ca certs ...
	I1227 08:56:02.332615   24108 certs.go:227] acquiring lock for ca certs: {Name:mk70fce6e604437b1434195361f1f409f08742f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:56:02.332749   24108 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key
	I1227 08:56:02.332808   24108 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key
	I1227 08:56:02.332826   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 08:56:02.332851   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 08:56:02.332871   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 08:56:02.332887   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 08:56:02.332965   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem (1338 bytes)
	W1227 08:56:02.333010   24108 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461_empty.pem, impossibly tiny 0 bytes
	I1227 08:56:02.333027   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 08:56:02.333079   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/ca.pem (1078 bytes)
	I1227 08:56:02.333119   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/cert.pem (1123 bytes)
	I1227 08:56:02.333153   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/key.pem (1679 bytes)
	I1227 08:56:02.333216   24108 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem (1708 bytes)
	I1227 08:56:02.333264   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.333285   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem -> /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.333302   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem -> /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.333328   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 08:56:02.365645   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 08:56:02.395629   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 08:56:02.425519   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 08:56:02.455554   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 08:56:02.486238   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/certs/9461.pem --> /usr/share/ca-certificates/9461.pem (1338 bytes)
	I1227 08:56:02.515842   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/ssl/certs/94612.pem --> /usr/share/ca-certificates/94612.pem (1708 bytes)
	I1227 08:56:02.545758   24108 ssh_runner.go:195] Run: openssl version
	I1227 08:56:02.552395   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.564618   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/94612.pem /etc/ssl/certs/94612.pem
	I1227 08:56:02.577235   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.582685   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:33 /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.582759   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94612.pem
	I1227 08:56:02.590482   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 08:56:02.601896   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/94612.pem /etc/ssl/certs/3ec20f2e.0
	I1227 08:56:02.613606   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.625518   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 08:56:02.637508   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.642823   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.642901   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 08:56:02.650764   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 08:56:02.663547   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 08:56:02.675853   24108 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.688458   24108 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9461.pem /etc/ssl/certs/9461.pem
	I1227 08:56:02.701658   24108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.706958   24108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:33 /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.707033   24108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9461.pem
	I1227 08:56:02.714242   24108 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 08:56:02.726789   24108 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9461.pem /etc/ssl/certs/51391683.0
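
Note on the openssl/ln steps above: each installed certificate is also exposed under its OpenSSL subject hash (for example /etc/ssl/certs/b5213941.0 pointing at minikubeCA.pem), which is how TLS libraries locate trusted CAs. The following is only a sketch of that per-certificate step, assuming a hypothetical helper name linkBySubjectHash and shelling out to openssl; it is not minikube's actual certs code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate
    // and creates /etc/ssl/certs/<hash>.0 pointing at it, mirroring the
    // `openssl x509 -hash` + `ln -fs` pair seen in the log above.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
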
	I1227 08:56:02.740816   24108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 08:56:02.745870   24108 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 08:56:02.745924   24108 kubeadm.go:935] updating node {m02 192.168.39.160 8443 v1.35.0 docker false true} ...
	I1227 08:56:02.746010   24108 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-899276-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 08:56:02.746115   24108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 08:56:02.758129   24108 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1227 08:56:02.758244   24108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1227 08:56:02.770426   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I1227 08:56:02.770451   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I1227 08:56:02.770474   24108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:56:02.770479   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm -> /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 08:56:02.770428   24108 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1227 08:56:02.770532   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 08:56:02.770547   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl -> /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 08:56:02.770638   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 08:56:02.775599   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1227 08:56:02.775636   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1227 08:56:02.800423   24108 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet -> /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 08:56:02.800448   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1227 08:56:02.800474   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1227 08:56:02.800530   24108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 08:56:02.847555   24108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1227 08:56:02.847596   24108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
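
The "Not caching binary, using https://dl.k8s.io/...?checksum=file:..." lines above describe a checksum-verified download of the kubelet/kubeadm/kubectl binaries. Below is a minimal standard-library Go sketch of that pattern, under two assumptions that are not taken from minikube's download code: the helper name fetchChecked, and that the first whitespace-separated field of the published .sha256 file is the hex digest.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchChecked downloads url to dest and verifies it against url+".sha256".
    func fetchChecked(url, dest string) error {
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(sumBytes))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file for %s", url)
        }
        want := fields[0]

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("GET %s: %s", url, resp.Status)
        }

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        // Hash while writing so the file is only trusted if the digest matches.
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
        }
        return nil
    }

    func main() {
        if err := fetchChecked("https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl", "kubectl"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
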
	I1227 08:56:03.589571   24108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1227 08:56:03.603768   24108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1227 08:56:03.631212   24108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 08:56:03.655890   24108 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1227 08:56:03.660915   24108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 08:56:03.680065   24108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 08:56:03.823402   24108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 08:56:03.862307   24108 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:56:03.862561   24108 start.go:318] joinCluster: &{Name:multinode-899276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-899276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:56:03.862676   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1227 08:56:03.865388   24108 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:56:03.865858   24108 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:56:03.865900   24108 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:56:03.866073   24108 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:56:04.026904   24108 start.go:344] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1227 08:56:04.027011   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9k0kod.6geqtmlyqvlg3686 --discovery-token-ca-cert-hash sha256:493e845651b470eb7d698f397abcf644faa5077fb7fa01316f4c06248d5b345c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-899276-m02"
	I1227 08:56:04.959833   24108 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1227 08:56:05.276831   24108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false
	I1227 08:56:05.365119   24108 start.go:320] duration metric: took 1.502556165s to joinCluster
	I1227 08:56:05.367341   24108 out.go:203] 
	W1227 08:56:05.368707   24108 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-899276-m02 minikube.k8s.io/updated_at=2025_12_27T08_56_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=multinode-899276 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-899276-m02" not found
	
	W1227 08:56:05.368724   24108 out.go:285] * 
	W1227 08:56:05.369029   24108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 08:56:05.370349   24108 out.go:203] 
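
The GUEST_START failure above looks like a registration race: `kubectl label` runs about one second after `kubeadm join` returns, before the kubelet on multinode-899276-m02 has created its Node object, so the apiserver answers "nodes \"multinode-899276-m02\" not found". The sketch below shows one way to guard such a step by polling for the Node first; it uses client-go and an assumed helper name (waitForNode) and is not minikube's actual code path.

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNode polls until the named Node object exists or the timeout expires.
    // Only once it returns nil would it be safe to apply labels to the node.
    func waitForNode(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                switch {
                case err == nil:
                    return true, nil // node registered
                case errors.IsNotFound(err):
                    return false, nil // kubelet has not registered yet; keep polling
                default:
                    return false, err
                }
            })
    }

    func main() {
        // Wiring up a real *kubernetes.Clientset is out of scope for this sketch.
        fmt.Println("waitForNode is a sketch; see the lead-in for assumptions")
    }
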
	
	
	==> Docker <==
	Dec 27 08:55:03 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:03.498172293Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 27 08:55:04 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:04.998776948Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.109632332Z" level=info msg="Loading containers: start."
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.247245769Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.377426026Z" level=info msg="Loading containers: done."
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.391637269Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.391811290Z" level=info msg="Initializing buildkit"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.413046081Z" level=info msg="Completed buildkit initialization"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419503264Z" level=info msg="Daemon has completed initialization"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419576305Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419733300Z" level=info msg="API listen on /run/docker.sock"
	Dec 27 08:55:05 multinode-899276 dockerd[1559]: time="2025-12-27T08:55:05.419775153Z" level=info msg="API listen on [::]:2376"
	Dec 27 08:55:05 multinode-899276 systemd[1]: Started Docker Application Container Engine.
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d6e78e0ce85e8fe5edb8277132aa64d3c6e7b854ca063f186efe83036788a703/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/84314fd3b6e4330cc6b60d3efa4271b1b31c8f7297dbc6f7810f7d4222821a3c/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/01c9987cccbc7847d3b2300457909a1b20a5c3ab68ebdcb2787f46b9223e82fe/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e30cff9be5d8f21e22f56e32fdf4665f38efb1df6a4b4088fd9482e8e3f11b25/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:19 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 27 08:55:21 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d6ec4f5debfedd33fc26996965caee4b0790894833f749df68708096cc935f1/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:21 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b5c9d2f69beb277a5fa8a92c4c1be6942492e1323ecd969f21893fb56053bd2/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:25 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:25Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88: Status: Downloaded newer image for kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"
	Dec 27 08:55:41 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0e0169a737f1b2eff8f1daf82ec9040343a68bccda0dbcd16c6ebd9a120493b2/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:55:41 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bed607b026b4fde1069a1cde835d4fb71c333fa7c430321acf31a9a7b911f0b/resolv.conf as [nameserver 192.168.122.1]"
	Dec 27 08:56:08 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:56:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a14f9917a507456ba9f56d69e0b97a12f8cbd840744c0297fa7a6b0716acb0bf/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 27 08:56:10 multinode-899276 cri-dockerd[1424]: time="2025-12-27T08:56:10Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	63961275984e8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   a14f9917a5074       busybox-769dd8b7dd-p4j54                   default
	6895d0c824741       aa5e3ebc0dfed                                                                                         About a minute ago   Running             coredns                   0                   0e0169a737f1b       coredns-7d764666f9-952ns                   kube-system
	12a2f3326d0f4       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       0                   5bed607b026b4       storage-provisioner                        kube-system
	a7b61d118b3f1       kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae              About a minute ago   Running             kindnet-cni               0                   4b5c9d2f69beb       kindnet-mgnsl                              kube-system
	d50ff81fb41a6       32652ff1bbe6b                                                                                         About a minute ago   Running             kube-proxy                0                   4d6ec4f5debfe       kube-proxy-rrb2x                           kube-system
	806a4f701d170       2c9a4b058bd7e                                                                                         2 minutes ago        Running             kube-controller-manager   0                   e30cff9be5d8f       kube-controller-manager-multinode-899276   kube-system
	8f2fcc85e5e1f       550794e3b12ac                                                                                         2 minutes ago        Running             kube-scheduler            0                   01c9987cccbc7       kube-scheduler-multinode-899276            kube-system
	14fb1b4cc933a       5c6acd67e9cd1                                                                                         2 minutes ago        Running             kube-apiserver            0                   84314fd3b6e43       kube-apiserver-multinode-899276            kube-system
	4ca9b8bb650e0       0a108f7189562                                                                                         2 minutes ago        Running             etcd                      0                   d6e78e0ce85e8       etcd-multinode-899276                      kube-system
	
	
	==> coredns [6895d0c82474] <==
	[INFO] 10.244.0.3:56359 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105017s
	[INFO] 10.244.1.2:36837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104525s
	[INFO] 10.244.1.2:41819 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000155773s
	[INFO] 10.244.1.2:46872 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137145s
	[INFO] 10.244.1.2:60841 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149774s
	[INFO] 10.244.1.2:54807 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000069457s
	[INFO] 10.244.1.2:52559 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147505s
	[INFO] 10.244.1.2:42937 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160067s
	[INFO] 10.244.1.2:59092 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088809s
	[INFO] 10.244.0.3:51848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266159s
	[INFO] 10.244.0.3:37790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130917s
	[INFO] 10.244.0.3:39152 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000309509s
	[INFO] 10.244.0.3:45623 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115237s
	[INFO] 10.244.1.2:47231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114267s
	[INFO] 10.244.1.2:42991 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185089s
	[INFO] 10.244.1.2:47992 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126223s
	[INFO] 10.244.1.2:56536 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070142s
	[INFO] 10.244.0.3:47797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109762s
	[INFO] 10.244.0.3:54449 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000142674s
	[INFO] 10.244.0.3:48856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098125s
	[INFO] 10.244.0.3:33666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101743s
	[INFO] 10.244.1.2:56673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124664s
	[INFO] 10.244.1.2:33854 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122427s
	[INFO] 10.244.1.2:51046 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090638s
	[INFO] 10.244.1.2:57142 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106284s
	
	
	==> describe nodes <==
	Name:               multinode-899276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=multinode-899276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T08_55_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 08:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899276
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 08:57:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 08:56:16 +0000   Sat, 27 Dec 2025 08:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 08:56:16 +0000   Sat, 27 Dec 2025 08:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 08:56:16 +0000   Sat, 27 Dec 2025 08:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 08:56:16 +0000   Sat, 27 Dec 2025 08:55:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    multinode-899276
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d370929938249538ba64fb6eca3e648
	  System UUID:                6d370929-9382-4953-8ba6-4fb6eca3e648
	  Boot ID:                    e7571780-ff7a-4d59-887f-f7dbfc0c1beb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-p4j54                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 coredns-7d764666f9-952ns                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     119s
	  kube-system                 etcd-multinode-899276                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m6s
	  kube-system                 kindnet-mgnsl                               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      119s
	  kube-system                 kube-apiserver-multinode-899276             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-multinode-899276    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-rrb2x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-multinode-899276             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (7%)  220Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  2m    node-controller  Node multinode-899276 event: Registered Node multinode-899276 in Controller
	
	
	Name:               multinode-899276-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899276-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 08:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899276-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 08:57:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 08:56:36 +0000   Sat, 27 Dec 2025 08:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 08:56:36 +0000   Sat, 27 Dec 2025 08:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 08:56:36 +0000   Sat, 27 Dec 2025 08:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 08:56:36 +0000   Sat, 27 Dec 2025 08:56:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    multinode-899276-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 08f0927e00b140b5b768ac07d0776e28
	  System UUID:                08f0927e-00b1-40b5-b768-ac07d0776e28
	  Boot ID:                    1d4ac048-9867-48e6-96eb-9e9bc0666768
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-pjzv6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kindnet-4pk8r               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      74s
	  kube-system                 kube-proxy-xhrn8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  70s   node-controller  Node multinode-899276-m02 event: Registered Node multinode-899276-m02 in Controller
	
	
	Name:               multinode-899276-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899276-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=multinode-899276
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T08_56_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 08:56:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899276-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 08:57:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 08:57:18 +0000   Sat, 27 Dec 2025 08:56:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 08:57:18 +0000   Sat, 27 Dec 2025 08:56:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 08:57:18 +0000   Sat, 27 Dec 2025 08:56:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 08:57:18 +0000   Sat, 27 Dec 2025 08:57:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    multinode-899276-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 972e832be78846e48b040d2a85fa2348
	  System UUID:                972e832b-e788-46e4-8b04-0d2a85fa2348
	  Boot ID:                    4dbda481-f664-488f-9c59-40112e7d01b0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-w92lh       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      23s
	  kube-system                 kube-proxy-lzbwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  20s   node-controller  Node multinode-899276-m03 event: Registered Node multinode-899276-m03 in Controller
	
	
	==> dmesg <==
	[Dec27 08:54] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001306] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.170243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.117819] kauditd_printk_skb: 1 callbacks suppressed
	[Dec27 08:55] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.102827] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.160897] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.244934] kauditd_printk_skb: 18 callbacks suppressed
	[  +4.325682] kauditd_printk_skb: 165 callbacks suppressed
	[ +14.621191] kauditd_printk_skb: 2 callbacks suppressed
	[Dec27 08:56] kauditd_printk_skb: 90 callbacks suppressed
	
	
	==> etcd [4ca9b8bb650e] <==
	{"level":"info","ts":"2025-12-27T08:55:10.785974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"602226ed500416f5 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T08:55:10.786004Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2025-12-27T08:55:10.791476Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.793884Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:multinode-899276 ClientURLs:[https://192.168.39.24:2379]}","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T08:55:10.794043Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T08:55:10.793909Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T08:55:10.795404Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T08:55:10.799763Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T08:55:10.802567Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T08:55:10.802644Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T08:55:10.804819Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.805072Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.805735Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T08:55:10.805926Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T08:55:10.807174Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T08:55:10.815395Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2025-12-27T08:55:10.816576Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-12-27T08:56:04.877177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.572626ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-27T08:56:04.877302Z","caller":"traceutil/trace.go:172","msg":"trace[1557608848] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:454; }","duration":"205.776267ms","start":"2025-12-27T08:56:04.671511Z","end":"2025-12-27T08:56:04.877287Z","steps":["trace[1557608848] 'range keys from in-memory index tree'  (duration: 205.559438ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T08:56:04.877487Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.4767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-27T08:56:04.877538Z","caller":"traceutil/trace.go:172","msg":"trace[1875828016] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:454; }","duration":"245.507791ms","start":"2025-12-27T08:56:04.631992Z","end":"2025-12-27T08:56:04.877500Z","steps":["trace[1875828016] 'agreement among raft nodes before linearized reading'  (duration: 92.674358ms)","trace[1875828016] 'range keys from in-memory index tree'  (duration: 152.742931ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T08:56:04.878377Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.056298ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654399270533750011 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-tz6w5\" mod_revision:454 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-tz6w5\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-tz6w5\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-27T08:56:04.878902Z","caller":"traceutil/trace.go:172","msg":"trace[1051096777] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"247.420304ms","start":"2025-12-27T08:56:04.631468Z","end":"2025-12-27T08:56:04.878888Z","steps":["trace[1051096777] 'process raft request'  (duration: 93.279326ms)","trace[1051096777] 'compare'  (duration: 152.870907ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T08:56:56.097153Z","caller":"traceutil/trace.go:172","msg":"trace[73003346] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"329.830325ms","start":"2025-12-27T08:56:55.767308Z","end":"2025-12-27T08:56:56.097139Z","steps":["trace[73003346] 'process raft request'  (duration: 324.656674ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T08:56:56.097288Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-27T08:56:55.767258Z","time spent":"329.978289ms","remote":"127.0.0.1:58566","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2346,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-rl7qr\" mod_revision:589 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-rl7qr\" value_size:2292 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-rl7qr\" > >"}
	
	
	==> kernel <==
	 08:57:19 up 2 min,  0 users,  load average: 0.38, 0.32, 0.13
	Linux multinode-899276 6.6.95 #1 SMP PREEMPT_DYNAMIC Fri Dec 26 06:43:12 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [a7b61d118b3f] <==
	I1227 08:56:36.214794       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:56:36.214861       1 main.go:301] handling current node
	I1227 08:56:36.214881       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1227 08:56:36.214886       1 main.go:324] Node multinode-899276-m02 has CIDR [10.244.1.0/24] 
	I1227 08:56:46.221791       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:56:46.221846       1 main.go:301] handling current node
	I1227 08:56:46.221861       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1227 08:56:46.221867       1 main.go:324] Node multinode-899276-m02 has CIDR [10.244.1.0/24] 
	I1227 08:56:56.222872       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1227 08:56:56.222925       1 main.go:324] Node multinode-899276-m02 has CIDR [10.244.1.0/24] 
	I1227 08:56:56.223577       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:56:56.223610       1 main.go:301] handling current node
	I1227 08:57:06.215343       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:57:06.215394       1 main.go:301] handling current node
	I1227 08:57:06.215410       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1227 08:57:06.215416       1 main.go:324] Node multinode-899276-m02 has CIDR [10.244.1.0/24] 
	I1227 08:57:06.216048       1 main.go:297] Handling node with IPs: map[192.168.39.23:{}]
	I1227 08:57:06.216077       1 main.go:324] Node multinode-899276-m03 has CIDR [10.244.2.0/24] 
	I1227 08:57:06.216511       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.23 Flags: [] Table: 0 Realm: 0} 
	I1227 08:57:16.217317       1 main.go:297] Handling node with IPs: map[192.168.39.24:{}]
	I1227 08:57:16.217403       1 main.go:301] handling current node
	I1227 08:57:16.217426       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1227 08:57:16.217431       1 main.go:324] Node multinode-899276-m02 has CIDR [10.244.1.0/24] 
	I1227 08:57:16.217978       1 main.go:297] Handling node with IPs: map[192.168.39.23:{}]
	I1227 08:57:16.218052       1 main.go:324] Node multinode-899276-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [14fb1b4cc933] <==
	I1227 08:55:13.225752       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 08:55:13.980887       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 08:55:14.037402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 08:55:14.121453       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 08:55:14.128526       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.24]
	I1227 08:55:14.129442       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 08:55:14.135088       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 08:55:14.269225       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 08:55:15.386610       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 08:55:15.428640       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 08:55:15.441371       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 08:55:19.919728       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 08:55:20.223365       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 08:55:20.228936       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 08:55:20.270234       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 08:56:30.990137       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42700: use of closed network connection
	E1227 08:56:31.176396       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42728: use of closed network connection
	E1227 08:56:31.382389       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42752: use of closed network connection
	E1227 08:56:31.577102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42772: use of closed network connection
	E1227 08:56:31.761366       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42784: use of closed network connection
	E1227 08:56:31.948786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42802: use of closed network connection
	E1227 08:56:32.269426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42830: use of closed network connection
	E1227 08:56:32.463854       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42834: use of closed network connection
	E1227 08:56:32.648216       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42840: use of closed network connection
	E1227 08:56:32.834239       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:42850: use of closed network connection
	
	
	==> kube-controller-manager [806a4f701d17] <==
	I1227 08:55:19.155501       1 range_allocator.go:177] "Sending events to api server"
	I1227 08:55:19.155519       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.155544       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 08:55:19.155550       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 08:55:19.155554       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.155636       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.167077       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-899276" podCIDRs=["10.244.0.0/24"]
	I1227 08:55:19.172639       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.176175       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.176447       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.179607       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.196290       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.208898       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:19.208913       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 08:55:19.208917       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 08:55:44.094465       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 08:56:05.429119       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899276-m02\" does not exist"
	I1227 08:56:05.458174       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-899276-m02" podCIDRs=["10.244.1.0/24"]
	I1227 08:56:09.099218       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-899276-m02"
	I1227 08:56:27.199945       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899276-m02"
	I1227 08:56:56.302474       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899276-m02"
	I1227 08:56:56.304514       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-899276-m03\" does not exist"
	I1227 08:56:56.312418       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-899276-m03" podCIDRs=["10.244.2.0/24"]
	I1227 08:56:59.121951       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-899276-m03"
	I1227 08:57:18.022819       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-899276-m02"
	
	
	==> kube-proxy [d50ff81fb41a] <==
	I1227 08:55:21.628068       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 08:55:21.731947       1 shared_informer.go:377] "Caches are synced"
	I1227 08:55:21.731996       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
	E1227 08:55:21.739671       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 08:55:21.830226       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1227 08:55:21.830342       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1227 08:55:21.830404       1 server_linux.go:136] "Using iptables Proxier"
	I1227 08:55:21.839592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 08:55:21.840293       1 server.go:529] "Version info" version="v1.35.0"
	I1227 08:55:21.840321       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 08:55:21.842846       1 config.go:200] "Starting service config controller"
	I1227 08:55:21.842864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 08:55:21.842880       1 config.go:106] "Starting endpoint slice config controller"
	I1227 08:55:21.842884       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 08:55:21.842909       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 08:55:21.842915       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 08:55:21.846740       1 config.go:309] "Starting node config controller"
	I1227 08:55:21.846890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 08:55:21.942963       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 08:55:21.943020       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 08:55:21.943138       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 08:55:21.948504       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [8f2fcc85e5e1] <==
	E1227 08:55:12.377527       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 08:55:12.379893       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 08:55:12.380089       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 08:55:12.380428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 08:55:12.381099       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 08:55:12.381174       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 08:55:12.384043       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 08:55:12.384255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 08:55:13.242305       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 08:55:13.257422       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 08:55:13.303156       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 08:55:13.319157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 08:55:13.362023       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 08:55:13.362795       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 08:55:13.411755       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 08:55:13.420451       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 08:55:13.431365       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 08:55:13.480845       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 08:55:13.542450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 08:55:13.554908       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 08:55:13.560944       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 08:55:13.650997       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 08:55:13.693380       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 08:55:13.694477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1227 08:55:16.332120       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 08:55:22 multinode-899276 kubelet[2549]: I1227 08:55:22.182334    2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-rrb2x" podStartSLOduration=2.182320518 podStartE2EDuration="2.182320518s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 08:55:21.591305433 +0000 UTC m=+6.370036755" watchObservedRunningTime="2025-12-27 08:55:22.182320518 +0000 UTC m=+6.961051864"
	Dec 27 08:55:23 multinode-899276 kubelet[2549]: E1227 08:55:23.868801    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-899276" containerName="kube-apiserver"
	Dec 27 08:55:24 multinode-899276 kubelet[2549]: E1227 08:55:24.280199    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-899276" containerName="kube-scheduler"
	Dec 27 08:55:26 multinode-899276 kubelet[2549]: I1227 08:55:26.685630    2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-mgnsl" podStartSLOduration=2.79150236 podStartE2EDuration="6.685618144s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="2025-12-27 08:55:21.301251881 +0000 UTC m=+6.079983198" lastFinishedPulling="2025-12-27 08:55:25.195367666 +0000 UTC m=+9.974098982" observedRunningTime="2025-12-27 08:55:26.683876008 +0000 UTC m=+11.462607343" watchObservedRunningTime="2025-12-27 08:55:26.685618144 +0000 UTC m=+11.464349467"
	Dec 27 08:55:28 multinode-899276 kubelet[2549]: E1227 08:55:28.767005    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-multinode-899276" containerName="kube-controller-manager"
	Dec 27 08:55:32 multinode-899276 kubelet[2549]: E1227 08:55:32.167933    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-899276" containerName="etcd"
	Dec 27 08:55:33 multinode-899276 kubelet[2549]: E1227 08:55:33.875439    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-899276" containerName="kube-apiserver"
	Dec 27 08:55:34 multinode-899276 kubelet[2549]: E1227 08:55:34.286744    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-899276" containerName="kube-scheduler"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.671822    2549 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789814    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2dd7f649-dfe6-4a2d-b321-673b664a5d1b-tmp\") pod \"storage-provisioner\" (UID: \"2dd7f649-dfe6-4a2d-b321-673b664a5d1b\") " pod="kube-system/storage-provisioner"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789865    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9a3c2-20bf-4e86-8443-702c47b3e04b-config-volume\") pod \"coredns-7d764666f9-952ns\" (UID: \"f0e9a3c2-20bf-4e86-8443-702c47b3e04b\") " pod="kube-system/coredns-7d764666f9-952ns"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789892    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7pql\" (UniqueName: \"kubernetes.io/projected/f0e9a3c2-20bf-4e86-8443-702c47b3e04b-kube-api-access-l7pql\") pod \"coredns-7d764666f9-952ns\" (UID: \"f0e9a3c2-20bf-4e86-8443-702c47b3e04b\") " pod="kube-system/coredns-7d764666f9-952ns"
	Dec 27 08:55:40 multinode-899276 kubelet[2549]: I1227 08:55:40.789911    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcsm\" (UniqueName: \"kubernetes.io/projected/2dd7f649-dfe6-4a2d-b321-673b664a5d1b-kube-api-access-pxcsm\") pod \"storage-provisioner\" (UID: \"2dd7f649-dfe6-4a2d-b321-673b664a5d1b\") " pod="kube-system/storage-provisioner"
	Dec 27 08:55:41 multinode-899276 kubelet[2549]: E1227 08:55:41.773849    2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
	Dec 27 08:55:41 multinode-899276 kubelet[2549]: I1227 08:55:41.819800    2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-952ns" podStartSLOduration=21.81978365 podStartE2EDuration="21.81978365s" podCreationTimestamp="2025-12-27 08:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 08:55:41.799893618 +0000 UTC m=+26.578624941" watchObservedRunningTime="2025-12-27 08:55:41.81978365 +0000 UTC m=+26.598514973"
	Dec 27 08:55:42 multinode-899276 kubelet[2549]: E1227 08:55:42.792462    2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
	Dec 27 08:55:43 multinode-899276 kubelet[2549]: E1227 08:55:43.808397    2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
	Dec 27 08:56:07 multinode-899276 kubelet[2549]: I1227 08:56:07.526489    2549 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=46.526426849 podStartE2EDuration="46.526426849s" podCreationTimestamp="2025-12-27 08:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 08:55:41.8558014 +0000 UTC m=+26.634532724" watchObservedRunningTime="2025-12-27 08:56:07.526426849 +0000 UTC m=+52.305158172"
	Dec 27 08:56:07 multinode-899276 kubelet[2549]: I1227 08:56:07.594923    2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glcvx\" (UniqueName: \"kubernetes.io/projected/4b784fd0-47bb-413f-9344-7abf0389d17a-kube-api-access-glcvx\") pod \"busybox-769dd8b7dd-p4j54\" (UID: \"4b784fd0-47bb-413f-9344-7abf0389d17a\") " pod="default/busybox-769dd8b7dd-p4j54"
	Dec 27 08:56:08 multinode-899276 kubelet[2549]: I1227 08:56:08.075004    2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a14f9917a507456ba9f56d69e0b97a12f8cbd840744c0297fa7a6b0716acb0bf"
	Dec 27 08:56:39 multinode-899276 kubelet[2549]: E1227 08:56:39.367175    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-899276" containerName="kube-scheduler"
	Dec 27 08:56:53 multinode-899276 kubelet[2549]: E1227 08:56:53.366994    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-899276" containerName="etcd"
	Dec 27 08:56:54 multinode-899276 kubelet[2549]: E1227 08:56:54.366448    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-899276" containerName="kube-apiserver"
	Dec 27 08:56:58 multinode-899276 kubelet[2549]: E1227 08:56:58.365479    2549 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-multinode-899276" containerName="kube-controller-manager"
	Dec 27 08:57:07 multinode-899276 kubelet[2549]: E1227 08:57:07.366205    2549 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-952ns" containerName="coredns"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-899276 -n multinode-899276
helpers_test.go:270: (dbg) Run:  kubectl --context multinode-899276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiNode/serial/MultiNodeLabels FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/MultiNodeLabels (1.58s)
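The post-mortem above dumps each node's label set (kubernetes.io/hostname, minikube.k8s.io/commit, minikube.k8s.io/primary, and so on), which appears to be what MultiNodeLabels inspects. A minimal sketch of checking those labels by hand against the same cluster; this is an illustrative command, not the exact query used by the test:

	kubectl --context multinode-899276 get nodes --show-labels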

                                                
                                    

Test pass (334/370)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.38
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.35.0/json-events 3.03
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.16
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.64
22 TestOffline 100.97
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 142.32
29 TestAddons/serial/Volcano 43.32
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.54
35 TestAddons/parallel/Registry 15.89
36 TestAddons/parallel/RegistryCreds 0.57
37 TestAddons/parallel/Ingress 20.01
38 TestAddons/parallel/InspektorGadget 11.65
39 TestAddons/parallel/MetricsServer 6.77
41 TestAddons/parallel/CSI 35.75
42 TestAddons/parallel/Headlamp 22.2
43 TestAddons/parallel/CloudSpanner 6.68
44 TestAddons/parallel/LocalPath 56.71
45 TestAddons/parallel/NvidiaDevicePlugin 6.53
46 TestAddons/parallel/Yakd 12.51
48 TestAddons/StoppedEnableDisable 14.25
49 TestCertOptions 48.51
50 TestCertExpiration 310.62
51 TestDockerFlags 62.84
52 TestForceSystemdFlag 80.9
53 TestForceSystemdEnv 102.18
58 TestErrorSpam/setup 36.9
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.73
61 TestErrorSpam/pause 1.31
62 TestErrorSpam/unpause 1.55
63 TestErrorSpam/stop 6.45
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.94
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 58.44
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.17
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 0.99
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 52.26
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.99
86 TestFunctional/serial/LogsFileCmd 1
87 TestFunctional/serial/InvalidService 4.45
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 14.9
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.89
97 TestFunctional/parallel/ServiceCmdConnect 28.54
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 41
101 TestFunctional/parallel/SSHCmd 0.36
102 TestFunctional/parallel/CpCmd 1.11
103 TestFunctional/parallel/MySQL 43.71
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.11
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
113 TestFunctional/parallel/License 0.38
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.42
116 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
126 TestFunctional/parallel/ServiceCmd/List 0.44
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
129 TestFunctional/parallel/ServiceCmd/Format 0.6
130 TestFunctional/parallel/ServiceCmd/URL 0.39
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
134 TestFunctional/parallel/DockerEnv/bash 0.78
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
136 TestFunctional/parallel/ProfileCmd/profile_list 0.36
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
138 TestFunctional/parallel/MountCmd/any-port 15.87
139 TestFunctional/parallel/MountCmd/specific-port 1.59
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.24
145 TestFunctional/parallel/ImageCommands/Setup 0.97
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
158 TestGvisorAddon 231.64
161 TestMultiControlPlane/serial/StartCluster 213.51
162 TestMultiControlPlane/serial/DeployApp 6.39
163 TestMultiControlPlane/serial/PingHostFromPods 1.39
164 TestMultiControlPlane/serial/AddWorkerNode 47.56
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
167 TestMultiControlPlane/serial/CopyFile 10.93
168 TestMultiControlPlane/serial/StopSecondaryNode 15.54
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
170 TestMultiControlPlane/serial/RestartSecondaryNode 25.69
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 155.17
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.45
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.59
175 TestMultiControlPlane/serial/StopCluster 42.96
176 TestMultiControlPlane/serial/RestartCluster 109.68
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.58
178 TestMultiControlPlane/serial/AddSecondaryNode 102.44
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
182 TestImageBuild/serial/Setup 37.04
183 TestImageBuild/serial/NormalBuild 1.48
184 TestImageBuild/serial/BuildWithBuildArg 1.02
185 TestImageBuild/serial/BuildWithDockerIgnore 0.93
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.82
191 TestJSONOutput/start/Command 82.92
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.65
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.59
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 11.81
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.23
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 79.62
223 TestMountStart/serial/StartWithMountFirst 20.29
224 TestMountStart/serial/VerifyMountFirst 0.29
225 TestMountStart/serial/StartWithMountSecond 21.17
226 TestMountStart/serial/VerifyMountSecond 0.3
227 TestMountStart/serial/DeleteFirst 0.7
228 TestMountStart/serial/VerifyMountPostDelete 0.29
229 TestMountStart/serial/Stop 1.27
230 TestMountStart/serial/RestartStopped 19.4
231 TestMountStart/serial/VerifyMountPostStop 0.31
235 TestMultiNode/serial/DeployApp2Nodes 24.65
236 TestMultiNode/serial/PingHostFrom2Pods 0.89
237 TestMultiNode/serial/AddNode 45.74
239 TestMultiNode/serial/ProfileList 0.48
240 TestMultiNode/serial/CopyFile 6.04
241 TestMultiNode/serial/StopNode 2.52
242 TestMultiNode/serial/StartAfterStop 43.81
243 TestMultiNode/serial/RestartKeepsNodes 208.1
244 TestMultiNode/serial/DeleteNode 2.15
245 TestMultiNode/serial/StopMultiNode 23.86
246 TestMultiNode/serial/RestartMultiNode 87.29
247 TestMultiNode/serial/ValidateNameConflict 38.96
254 TestScheduledStopUnix 108.85
255 TestSkaffold 118.98
258 TestRunningBinaryUpgrade 352.17
260 TestKubernetesUpgrade 186.02
271 TestISOImage/Setup 22.53
275 TestISOImage/Binaries/crictl 0.19
276 TestISOImage/Binaries/curl 0.17
277 TestISOImage/Binaries/docker 0.17
278 TestISOImage/Binaries/git 0.17
279 TestISOImage/Binaries/iptables 0.18
280 TestISOImage/Binaries/podman 0.16
281 TestISOImage/Binaries/rsync 0.18
282 TestISOImage/Binaries/socat 0.17
283 TestISOImage/Binaries/wget 0.17
284 TestISOImage/Binaries/VBoxControl 0.19
285 TestISOImage/Binaries/VBoxService 0.19
293 TestStoppedBinaryUpgrade/Setup 0.73
294 TestStoppedBinaryUpgrade/Upgrade 93.68
295 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
297 TestPause/serial/Start 83.12
298 TestPreload/Start-NoPreload-PullImage 126.92
300 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
301 TestNoKubernetes/serial/StartWithK8s 39.6
302 TestPause/serial/SecondStartNoReconfiguration 70.02
303 TestNoKubernetes/serial/StartWithStopK8s 15.21
304 TestPreload/Restart-With-Preload-Check-User-Image 48.89
305 TestNoKubernetes/serial/Start 31.82
306 TestPause/serial/Pause 0.61
307 TestPause/serial/VerifyStatus 0.25
308 TestPause/serial/Unpause 0.67
309 TestPause/serial/PauseAgain 0.83
310 TestPause/serial/DeletePaused 0.9
311 TestPause/serial/VerifyDeletedResources 0.55
312 TestNetworkPlugins/group/auto/Start 91.21
313 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
314 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
315 TestNoKubernetes/serial/ProfileList 1.58
316 TestNoKubernetes/serial/Stop 1.38
317 TestNoKubernetes/serial/StartNoArgs 34.47
319 TestNetworkPlugins/group/kindnet/Start 90.92
320 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
321 TestNetworkPlugins/group/calico/Start 105.57
322 TestNetworkPlugins/group/auto/KubeletFlags 0.23
323 TestNetworkPlugins/group/auto/NetCatPod 13.31
324 TestNetworkPlugins/group/auto/DNS 0.18
325 TestNetworkPlugins/group/auto/Localhost 0.14
326 TestNetworkPlugins/group/auto/HairPin 0.15
327 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
328 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
329 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
330 TestNetworkPlugins/group/custom-flannel/Start 61.53
331 TestNetworkPlugins/group/false/Start 85.06
332 TestNetworkPlugins/group/kindnet/DNS 0.2
333 TestNetworkPlugins/group/kindnet/Localhost 0.15
334 TestNetworkPlugins/group/kindnet/HairPin 0.18
335 TestNetworkPlugins/group/enable-default-cni/Start 104.37
336 TestNetworkPlugins/group/calico/ControllerPod 6.01
337 TestNetworkPlugins/group/calico/KubeletFlags 0.18
338 TestNetworkPlugins/group/calico/NetCatPod 13.29
339 TestNetworkPlugins/group/calico/DNS 0.19
340 TestNetworkPlugins/group/calico/Localhost 0.19
341 TestNetworkPlugins/group/calico/HairPin 0.18
342 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
343 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
344 TestNetworkPlugins/group/flannel/Start 64.3
345 TestNetworkPlugins/group/custom-flannel/DNS 0.22
346 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
347 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
348 TestNetworkPlugins/group/false/KubeletFlags 0.23
349 TestNetworkPlugins/group/false/NetCatPod 11.33
350 TestNetworkPlugins/group/bridge/Start 91.86
351 TestNetworkPlugins/group/false/DNS 0.18
352 TestNetworkPlugins/group/false/Localhost 0.15
353 TestNetworkPlugins/group/false/HairPin 0.15
354 TestNetworkPlugins/group/kubenet/Start 88.64
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
357 TestNetworkPlugins/group/flannel/ControllerPod 6.01
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
362 TestNetworkPlugins/group/flannel/NetCatPod 13.32
363 TestNetworkPlugins/group/flannel/DNS 0.23
364 TestNetworkPlugins/group/flannel/Localhost 0.17
365 TestNetworkPlugins/group/flannel/HairPin 0.21
367 TestStartStop/group/old-k8s-version/serial/FirstStart 97.62
369 TestStartStop/group/no-preload/serial/FirstStart 100.23
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
371 TestNetworkPlugins/group/bridge/NetCatPod 14.31
372 TestNetworkPlugins/group/bridge/DNS 0.18
373 TestNetworkPlugins/group/bridge/Localhost 0.14
374 TestNetworkPlugins/group/bridge/HairPin 0.14
375 TestNetworkPlugins/group/kubenet/KubeletFlags 0.2
376 TestNetworkPlugins/group/kubenet/NetCatPod 12.27
378 TestStartStop/group/embed-certs/serial/FirstStart 84.41
379 TestNetworkPlugins/group/kubenet/DNS 0.16
380 TestNetworkPlugins/group/kubenet/Localhost 0.2
381 TestNetworkPlugins/group/kubenet/HairPin 0.17
383 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 90.03
384 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
385 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
386 TestStartStop/group/old-k8s-version/serial/Stop 12.68
387 TestStartStop/group/no-preload/serial/DeployApp 9.37
388 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
389 TestStartStop/group/old-k8s-version/serial/SecondStart 51.67
390 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
391 TestStartStop/group/no-preload/serial/Stop 13.99
392 TestStartStop/group/embed-certs/serial/DeployApp 9.3
393 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
394 TestStartStop/group/no-preload/serial/SecondStart 44.63
395 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
396 TestStartStop/group/embed-certs/serial/Stop 14.89
397 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.43
398 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
399 TestStartStop/group/embed-certs/serial/SecondStart 52.51
400 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
401 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.6
402 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
403 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
404 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
405 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
406 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.8
407 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
409 TestStartStop/group/old-k8s-version/serial/Pause 3.02
410 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
411 TestStartStop/group/no-preload/serial/Pause 3.06
413 TestStartStop/group/newest-cni/serial/FirstStart 62.63
414 TestPreload/PreloadSrc/gcs 3.32
415 TestPreload/PreloadSrc/github 4.21
416 TestPreload/PreloadSrc/gcs-cached 1.06
418 TestISOImage/PersistentMounts//data 0.2
419 TestISOImage/PersistentMounts//var/lib/docker 0.2
420 TestISOImage/PersistentMounts//var/lib/cni 0.21
421 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
422 TestISOImage/PersistentMounts//var/lib/minikube 0.18
423 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
424 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
425 TestISOImage/VersionJSON 0.2
426 TestISOImage/eBPFSupport 0.18
427 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
428 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
429 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
430 TestStartStop/group/embed-certs/serial/Pause 2.97
431 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
432 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
433 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
434 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
435 TestStartStop/group/newest-cni/serial/DeployApp 0
436 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
437 TestStartStop/group/newest-cni/serial/Stop 14.31
438 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
439 TestStartStop/group/newest-cni/serial/SecondStart 29.38
440 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
441 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
442 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
443 TestStartStop/group/newest-cni/serial/Pause 2.68
TestDownloadOnly/v1.28.0/json-events (7.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-733339 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-733339 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (7.377056421s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.38s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 08:27:45.135266    9461 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1227 08:27:45.135367    9461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-733339
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-733339: exit status 85 (70.35123ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-733339 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-733339 │ jenkins │ v1.37.0 │ 27 Dec 25 08:27 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 08:27:37
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 08:27:37.811452    9473 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:27:37.811697    9473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:27:37.811707    9473 out.go:374] Setting ErrFile to fd 2...
	I1227 08:27:37.811714    9473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:27:37.811908    9473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	W1227 08:27:37.812118    9473 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22344-5516/.minikube/config/config.json: open /home/jenkins/minikube-integration/22344-5516/.minikube/config/config.json: no such file or directory
	I1227 08:27:37.812646    9473 out.go:368] Setting JSON to true
	I1227 08:27:37.813522    9473 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":608,"bootTime":1766823450,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 08:27:37.813584    9473 start.go:143] virtualization: kvm guest
	I1227 08:27:37.818144    9473 out.go:99] [download-only-733339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1227 08:27:37.818266    9473 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 08:27:37.818320    9473 notify.go:221] Checking for updates...
	I1227 08:27:37.819709    9473 out.go:171] MINIKUBE_LOCATION=22344
	I1227 08:27:37.821384    9473 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:27:37.822725    9473 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:27:37.824246    9473 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:27:37.826346    9473 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1227 08:27:37.829019    9473 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 08:27:37.829335    9473 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:27:38.357989    9473 out.go:99] Using the kvm2 driver based on user configuration
	I1227 08:27:38.358027    9473 start.go:309] selected driver: kvm2
	I1227 08:27:38.358034    9473 start.go:928] validating driver "kvm2" against <nil>
	I1227 08:27:38.358495    9473 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 08:27:38.359240    9473 start_flags.go:417] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1227 08:27:38.359441    9473 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 08:27:38.359479    9473 cni.go:84] Creating CNI manager for ""
	I1227 08:27:38.359557    9473 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 08:27:38.359571    9473 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 08:27:38.359627    9473 start.go:353] cluster config:
	{Name:download-only-733339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-733339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:27:38.359866    9473 iso.go:125] acquiring lock: {Name:mkf3af0a60e6ccee2eeb813de50903ed5d7e8922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 08:27:38.361414    9473 out.go:99] Downloading VM boot image ...
	I1227 08:27:38.361471    9473 download.go:114] Downloading: https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22344-5516/.minikube/cache/iso/amd64/minikube-v1.37.0-1766719468-22158-amd64.iso
	I1227 08:27:41.392450    9473 out.go:99] Starting "download-only-733339" primary control-plane node in "download-only-733339" cluster
	I1227 08:27:41.392488    9473 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1227 08:27:41.408966    9473 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1227 08:27:41.409012    9473 cache.go:65] Caching tarball of preloaded images
	I1227 08:27:41.409235    9473 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1227 08:27:41.410906    9473 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 08:27:41.410923    9473 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1227 08:27:41.410929    9473 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1227 08:27:41.435722    9473 preload.go:313] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1227 08:27:41.435871    9473 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-733339 host does not exist
	  To start a cluster, run: "minikube start -p download-only-733339"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-733339
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (3.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-154347 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-154347 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=kvm2 : (3.030371908s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.03s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 08:27:48.546889    9461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 08:27:48.546937    9461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-154347
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-154347: exit status 85 (68.474605ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-733339 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-733339 │ jenkins │ v1.37.0 │ 27 Dec 25 08:27 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 08:27 UTC │ 27 Dec 25 08:27 UTC │
	│ delete  │ -p download-only-733339                                                                                                                         │ download-only-733339 │ jenkins │ v1.37.0 │ 27 Dec 25 08:27 UTC │ 27 Dec 25 08:27 UTC │
	│ start   │ -o=json --download-only -p download-only-154347 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=kvm2 │ download-only-154347 │ jenkins │ v1.37.0 │ 27 Dec 25 08:27 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 08:27:45
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 08:27:45.566237    9680 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:27:45.566348    9680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:27:45.566357    9680 out.go:374] Setting ErrFile to fd 2...
	I1227 08:27:45.566361    9680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:27:45.566593    9680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:27:45.567062    9680 out.go:368] Setting JSON to true
	I1227 08:27:45.567813    9680 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":616,"bootTime":1766823450,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 08:27:45.567860    9680 start.go:143] virtualization: kvm guest
	I1227 08:27:45.569723    9680 out.go:99] [download-only-154347] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 08:27:45.569905    9680 notify.go:221] Checking for updates...
	I1227 08:27:45.571006    9680 out.go:171] MINIKUBE_LOCATION=22344
	I1227 08:27:45.572986    9680 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:27:45.574181    9680 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:27:45.575340    9680 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:27:45.576611    9680 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-154347 host does not exist
	  To start a cluster, run: "minikube start -p download-only-154347"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-154347
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1227 08:27:49.208118    9461 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-390044 --alsologtostderr --binary-mirror http://127.0.0.1:40647 --driver=kvm2 
helpers_test.go:176: Cleaning up "binary-mirror-390044" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-390044
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
TestOffline (100.97s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-771905 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-771905 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m40.017347951s)
helpers_test.go:176: Cleaning up "offline-docker-771905" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-771905
--- PASS: TestOffline (100.97s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-598566
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-598566: exit status 85 (66.33586ms)

                                                
                                                
-- stdout --
	* Profile "addons-598566" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-598566"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-598566
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-598566: exit status 85 (64.281976ms)

                                                
                                                
-- stdout --
	* Profile "addons-598566" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-598566"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (142.32s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-598566 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-598566 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m22.318834147s)
--- PASS: TestAddons/Setup (142.32s)

                                                
                                    
TestAddons/serial/Volcano (43.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 16.778797ms
addons_test.go:878: volcano-admission stabilized in 16.828201ms
addons_test.go:886: volcano-controller stabilized in 18.580916ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-lrdpn" [83171034-7e78-4e16-9581-206dd8eb0cab] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003477876s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-2xkd4" [1837c04a-138f-4793-95c5-946fd48d2419] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006549635s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-gmfgk" [7ab543af-1109-404c-adb6-ffd4f3877928] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004894284s
addons_test.go:905: (dbg) Run:  kubectl --context addons-598566 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-598566 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-598566 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [11ffa240-b16c-4593-95cb-e743506c1dc1] Pending
helpers_test.go:353: "test-job-nginx-0" [11ffa240-b16c-4593-95cb-e743506c1dc1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [11ffa240-b16c-4593-95cb-e743506c1dc1] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004325542s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable volcano --alsologtostderr -v=1: (11.792569079s)
--- PASS: TestAddons/serial/Volcano (43.32s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-598566 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-598566 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-598566 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-598566 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1963afff-6420-4914-90d6-1177ce66d6fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1963afff-6420-4914-90d6-1177ce66d6fc] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00442857s
addons_test.go:696: (dbg) Run:  kubectl --context addons-598566 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-598566 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-598566 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

                                                
                                    
TestAddons/parallel/Registry (15.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.487729ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-st9wg" [4573f620-e7f8-4ec7-82c1-f6a30dc83279] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005916388s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-2kzwf" [7f1f6151-6d31-48c1-b837-1b255db88011] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003351902s
addons_test.go:394: (dbg) Run:  kubectl --context addons-598566 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-598566 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-598566 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.194836608s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 ip
2025/12/27 08:31:29 [DEBUG] GET http://192.168.39.98:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.89s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.57s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 11.766549ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-598566
addons_test.go:334: (dbg) Run:  kubectl --context addons-598566 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.57s)

                                                
                                    
TestAddons/parallel/Ingress (20.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-598566 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-598566 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-598566 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [cd966db5-117c-41ba-8489-d275333960a9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [cd966db5-117c-41ba-8489-d275333960a9] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005354991s
I1227 08:31:43.967730    9461 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-598566 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.98
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable ingress-dns --alsologtostderr -v=1: (2.069444421s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable ingress --alsologtostderr -v=1: (7.691353778s)
--- PASS: TestAddons/parallel/Ingress (20.01s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.65s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-cb4cp" [972c7679-53fd-4534-bb86-71c3ad0dc976] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004321345s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable inspektor-gadget --alsologtostderr -v=1: (5.646972084s)
--- PASS: TestAddons/parallel/InspektorGadget (11.65s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.77s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 8.005358ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-vnqxx" [1487a92c-2a6a-451e-a5a7-07f17d7787c7] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005795592s
addons_test.go:465: (dbg) Run:  kubectl --context addons-598566 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.77s)

                                                
                                    
TestAddons/parallel/CSI (35.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1227 08:31:30.396150    9461 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 08:31:30.405237    9461 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 08:31:30.405269    9461 kapi.go:107] duration metric: took 9.139103ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 9.152732ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-598566 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-598566 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [bf4c038a-510f-47a3-aa2e-cb2bf8d18c5a] Pending
helpers_test.go:353: "task-pv-pod" [bf4c038a-510f-47a3-aa2e-cb2bf8d18c5a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [bf4c038a-510f-47a3-aa2e-cb2bf8d18c5a] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.005424685s
addons_test.go:574: (dbg) Run:  kubectl --context addons-598566 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-598566 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-598566 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-598566 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-598566 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-598566 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-598566 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [b7e0d42b-554b-41e9-858f-2055be3fbe8e] Pending
helpers_test.go:353: "task-pv-pod-restore" [b7e0d42b-554b-41e9-858f-2055be3fbe8e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [b7e0d42b-554b-41e9-858f-2055be3fbe8e] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003775249s
addons_test.go:616: (dbg) Run:  kubectl --context addons-598566 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-598566 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-598566 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.05084446s)
--- PASS: TestAddons/parallel/CSI (35.75s)

                                                
                                    
TestAddons/parallel/Headlamp (22.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-598566 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-h5jdd" [9e651c6d-3fa1-4b18-a5e8-ed60f6d09f72] Pending
helpers_test.go:353: "headlamp-6d8d595f-h5jdd" [9e651c6d-3fa1-4b18-a5e8-ed60f6d09f72] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-h5jdd" [9e651c6d-3fa1-4b18-a5e8-ed60f6d09f72] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.009140099s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable headlamp --alsologtostderr -v=1: (6.248783981s)
--- PASS: TestAddons/parallel/Headlamp (22.20s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-q8fxf" [3ff9acea-3a16-4d8f-b1a0-ff43561c82ee] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004405602s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.68s)

                                                
                                    
TestAddons/parallel/LocalPath (56.71s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-598566 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-598566 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [59c92fb4-e630-4baf-aefa-f837c6a6b474] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [59c92fb4-e630-4baf-aefa-f837c6a6b474] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [59c92fb4-e630-4baf-aefa-f837c6a6b474] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004949935s
addons_test.go:969: (dbg) Run:  kubectl --context addons-598566 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 ssh "cat /opt/local-path-provisioner/pvc-bb71bdcc-fd39-4ad3-94bc-122978d957be_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-598566 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-598566 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.843648986s)
--- PASS: TestAddons/parallel/LocalPath (56.71s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-66fgp" [1597c1a4-b2d5-4e6e-871d-fcf53b464ece] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005230076s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
TestAddons/parallel/Yakd (12.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-z589j" [b1ebd7d6-b176-4450-aefa-e1a4d9f926a9] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00422492s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-598566 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-598566 addons disable yakd --alsologtostderr -v=1: (6.504242866s)
--- PASS: TestAddons/parallel/Yakd (12.51s)

                                                
                                    
TestAddons/StoppedEnableDisable (14.25s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-598566
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-598566: (14.057209518s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-598566
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-598566
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-598566
--- PASS: TestAddons/StoppedEnableDisable (14.25s)

                                                
                                    
TestCertOptions (48.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-897649 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1227 09:12:52.353251    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:52.358632    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:52.369002    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:52.389350    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:52.429750    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:52.510167    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:52.670513    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:52.991177    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:53.632169    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:54.913369    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:57.474216    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-897649 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (47.110926421s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-897649 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-897649 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-897649 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-897649" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-897649
E1227 09:13:12.835383    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestCertOptions (48.51s)

                                                
                                    
TestCertExpiration (310.62s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-715077 --memory=3072 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-715077 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m0.886507364s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-715077 --memory=3072 --cert-expiration=8760h --driver=kvm2 
E1227 09:13:33.316442    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:14.277437    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-715077 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m8.847879045s)
helpers_test.go:176: Cleaning up "cert-expiration-715077" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-715077
--- PASS: TestCertExpiration (310.62s)

                                                
                                    
TestDockerFlags (62.84s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-333641 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-333641 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m1.551088628s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-333641 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-333641 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-333641" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-333641
--- PASS: TestDockerFlags (62.84s)

                                                
                                    
TestForceSystemdFlag (80.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-022104 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-022104 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m19.686120843s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-022104 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-022104" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-022104
--- PASS: TestForceSystemdFlag (80.90s)

                                                
                                    
TestForceSystemdEnv (102.18s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-633277 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-633277 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m40.951186002s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-633277 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-633277" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-633277
--- PASS: TestForceSystemdEnv (102.18s)

                                                
                                    
TestErrorSpam/setup (36.9s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-449609 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-449609 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-449609 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-449609 --driver=kvm2 : (36.900810646s)
--- PASS: TestErrorSpam/setup (36.90s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.31s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 pause
--- PASS: TestErrorSpam/pause (1.31s)

                                                
                                    
TestErrorSpam/unpause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 unpause
--- PASS: TestErrorSpam/unpause (1.55s)

                                                
                                    
TestErrorSpam/stop (6.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 stop: (3.209617668s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 stop: (1.281955783s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-449609 --log_dir /tmp/nospam-449609 stop: (1.956216018s)
--- PASS: TestErrorSpam/stop (6.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22344-5516/.minikube/files/etc/test/nested/copy/9461/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553834 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-553834 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m21.944120056s)
--- PASS: TestFunctional/serial/StartWithProxy (81.94s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (58.44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1227 08:34:35.839677    9461 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553834 --alsologtostderr -v=8
E1227 08:35:12.701912    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:12.707239    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:12.717584    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:12.737893    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:12.778289    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:12.858683    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:13.019127    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:13.339590    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:13.980584    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:15.261148    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:17.822209    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:22.942428    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:33.183224    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-553834 --alsologtostderr -v=8: (58.442443568s)
functional_test.go:678: soft start took 58.443051903s for "functional-553834" cluster.
I1227 08:35:34.282436    9461 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (58.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-553834 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-553834 /tmp/TestFunctionalserialCacheCmdcacheadd_local3063988486/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cache add minikube-local-cache-test:functional-553834
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cache delete minikube-local-cache-test:functional-553834
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-553834
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (172.90448ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.99s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 kubectl -- --context functional-553834 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-553834 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (52.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553834 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1227 08:35:53.663632    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-553834 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.26279313s)
functional_test.go:776: restart took 52.262938388s for "functional-553834" cluster.
I1227 08:36:31.795924    9461 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (52.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-553834 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 logs
--- PASS: TestFunctional/serial/LogsCmd (0.99s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 logs --file /tmp/TestFunctionalserialLogsFileCmd3672953854/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-553834 logs --file /tmp/TestFunctionalserialLogsFileCmd3672953854/001/logs.txt: (1.000888745s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.00s)

                                                
                                    
TestFunctional/serial/InvalidService (4.45s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-553834 apply -f testdata/invalidsvc.yaml
E1227 08:36:34.624520    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-553834
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-553834: exit status 115 (252.542774ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.57:31024 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-553834 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 config get cpus: exit status 14 (70.002483ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 config get cpus: exit status 14 (60.713318ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-553834 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-553834 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 15212: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.90s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553834 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-553834 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (140.171565ms)

                                                
                                                
-- stdout --
	* [functional-553834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:37:11.809221   15023 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:37:11.809620   15023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:11.809637   15023 out.go:374] Setting ErrFile to fd 2...
	I1227 08:37:11.809644   15023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:11.809935   15023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:37:11.810648   15023 out.go:368] Setting JSON to false
	I1227 08:37:11.812057   15023 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1182,"bootTime":1766823450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 08:37:11.812151   15023 start.go:143] virtualization: kvm guest
	I1227 08:37:11.814096   15023 out.go:179] * [functional-553834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 08:37:11.815891   15023 notify.go:221] Checking for updates...
	I1227 08:37:11.815902   15023 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 08:37:11.817599   15023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:37:11.819066   15023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:37:11.820513   15023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:37:11.825656   15023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 08:37:11.827202   15023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 08:37:11.829950   15023 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:37:11.830466   15023 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:37:11.869979   15023 out.go:179] * Using the kvm2 driver based on existing profile
	I1227 08:37:11.871184   15023 start.go:309] selected driver: kvm2
	I1227 08:37:11.871209   15023 start.go:928] validating driver "kvm2" against &{Name:functional-553834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-553834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:37:11.871365   15023 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 08:37:11.873821   15023 out.go:203] 
	W1227 08:37:11.875136   15023 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 08:37:11.876450   15023 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553834 --dry-run --alsologtostderr -v=1 --driver=kvm2 
I1227 08:37:11.979933    9461 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553834 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-553834 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (126.881671ms)

                                                
                                                
-- stdout --
	* [functional-553834] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:37:12.081662   15074 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:37:12.082078   15074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:12.082091   15074 out.go:374] Setting ErrFile to fd 2...
	I1227 08:37:12.082096   15074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:12.082483   15074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:37:12.083530   15074 out.go:368] Setting JSON to false
	I1227 08:37:12.084454   15074 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1182,"bootTime":1766823450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 08:37:12.084526   15074 start.go:143] virtualization: kvm guest
	I1227 08:37:12.086199   15074 out.go:179] * [functional-553834] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1227 08:37:12.087781   15074 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 08:37:12.087833   15074 notify.go:221] Checking for updates...
	I1227 08:37:12.093141   15074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:37:12.095101   15074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	I1227 08:37:12.096610   15074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	I1227 08:37:12.097953   15074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 08:37:12.099278   15074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 08:37:12.101122   15074 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:37:12.101801   15074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:37:12.137160   15074 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1227 08:37:12.138577   15074 start.go:309] selected driver: kvm2
	I1227 08:37:12.138599   15074 start.go:928] validating driver "kvm2" against &{Name:functional-553834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-553834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:37:12.138725   15074 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 08:37:12.141677   15074 out.go:203] 
	W1227 08:37:12.143173   15074 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 08:37:12.144500   15074 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (28.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-553834 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-553834 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-64qqh" [36c6149c-c063-4011-af75-9f8ea6f25cdd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-64qqh" [36c6149c-c063-4011-af75-9f8ea6f25cdd] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 28.008716591s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.39.57:30955
functional_test.go:1685: http://192.168.39.57:30955: success! body:
Request served by hello-node-connect-5d95464fd4-64qqh

HTTP/1.1 GET /

Host: 192.168.39.57:30955
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.54s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (41s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [4f8f665a-d95e-4136-bcad-00fe873b7e59] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003030342s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-553834 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-553834 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-553834 get pvc myclaim -o=json
I1227 08:36:45.460665    9461 retry.go:84] will retry after 3s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:6fec93ce-1d59-42ef-80d7-c818c0f8d8da ResourceVersion:813 Generation:0 CreationTimestamp:2025-12-27 08:36:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b60610 VolumeMode:0xc001b60620 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-553834 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-553834 apply -f testdata/storage-provisioner/pod.yaml
I1227 08:36:48.653949    9461 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f460abd6-11c1-475e-b79e-9e68b44e2d8d] Pending
helpers_test.go:353: "sp-pod" [f460abd6-11c1-475e-b79e-9e68b44e2d8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f460abd6-11c1-475e-b79e-9e68b44e2d8d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.014499457s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-553834 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-553834 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-553834 delete -f testdata/storage-provisioner/pod.yaml: (1.864224812s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-553834 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4f370b81-c159-4457-ba4b-625ca973ca9e] Pending
helpers_test.go:353: "sp-pod" [4f370b81-c159-4457-ba4b-625ca973ca9e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [4f370b81-c159-4457-ba4b-625ca973ca9e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004949399s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-553834 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.00s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh -n functional-553834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cp functional-553834:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd979134976/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh -n functional-553834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh -n functional-553834 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)
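The three cp invocations above cover host-to-guest, guest-to-host, and copying into a guest directory that does not yet exist, each verified by cat-ing the file back over ssh. A hedged way to check the same round trip by comparing checksums instead (md5sum being present in the guest is an assumption):
	out/minikube-linux-amd64 -p functional-553834 cp testdata/cp-test.txt /home/docker/cp-test.txt
	md5sum testdata/cp-test.txt
	out/minikube-linux-amd64 -p functional-553834 ssh -n functional-553834 "md5sum /home/docker/cp-test.txt"
	# the two digests should match; the guest-to-host direction can be checked the same way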

                                                
                                    
x
+
TestFunctional/parallel/MySQL (43.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-553834 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-5gc5p" [2b5174ac-ed62-49b3-8aa6-7241e293b856] Pending
helpers_test.go:353: "mysql-7d7b65bc95-5gc5p" [2b5174ac-ed62-49b3-8aa6-7241e293b856] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-5gc5p" [2b5174ac-ed62-49b3-8aa6-7241e293b856] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.004660225s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;": exit status 1 (163.42105ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1227 08:37:07.744261    9461 retry.go:84] will retry after 600ms: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;": exit status 1 (219.982988ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;": exit status 1 (413.760402ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;": exit status 1 (522.683128ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1227 08:37:13.671110    9461 retry.go:84] will retry after 2.2s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;": exit status 1 (262.203836ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- mysql -ppassword -e "show databases;"
2025/12/27 08:37:26 [DEBUG] GET http://127.0.0.1:33333/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (43.71s)
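The non-zero exits above are the expected warm-up for the mysql pod: ERROR 2002 while mysqld is still binding its socket, then ERROR 1045 while the root password from testdata/mysql.yaml is being applied, and finally a clean "show databases;". retry.go backs off between attempts (600ms, then 2.2s in this run). A rough shell equivalent of that polling, not the harness's actual Go retry code:
	# poll until mysqld in the pod accepts the root password from testdata/mysql.yaml
	until kubectl --context functional-553834 exec mysql-7d7b65bc95-5gc5p -- \
	    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
	  sleep 2   # crude fixed backoff; retry.go uses an increasing delay
	done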

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/9461/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /etc/test/nested/copy/9461/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
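FileSync leans on minikube's file sync: anything placed under the files/ tree of the minikube home directory is copied to the matching path inside the VM when the profile starts, which is how /etc/test/nested/copy/9461/hosts ends up in the guest. A sketch of setting that up by hand, assuming MINIKUBE_HOME points at the .minikube directory as it does in this run:
	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/9461"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/9461/hosts"
	# the tree is synced into the VM on the next start of the profile, after which:
	out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /etc/test/nested/copy/9461/hosts"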

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/9461.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /etc/ssl/certs/9461.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/9461.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /usr/share/ca-certificates/9461.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/94612.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /etc/ssl/certs/94612.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/94612.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /usr/share/ca-certificates/94612.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.11s)
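CertSync checks both the <pid>.pem copies and their OpenSSL subject-hash names (51391683.0, 3ec20f2e.0), the form that ca-certificates tooling expects under /etc/ssl/certs. The hash names can be cross-checked from the host; the source paths below assume the test certs sit under the files/ tree of MINIKUBE_HOME, which is an assumption about this setup:
	# the subject_hash of each cert should match its .0 filename inside the VM
	openssl x509 -noout -subject_hash -in "$MINIKUBE_HOME/files/etc/ssl/certs/9461.pem"    # expect 51391683
	openssl x509 -noout -subject_hash -in "$MINIKUBE_HOME/files/etc/ssl/certs/94612.pem"   # expect 3ec20f2e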

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-553834 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
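The go-template above just flattens the first node's label keys onto one line. If the template syntax is awkward to read, the same information is visible in the plain label column (purely an alternative, not what the test runs):
	kubectl --context functional-553834 get nodes --show-labels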

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 ssh "sudo systemctl is-active crio": exit status 1 (212.918504ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
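The non-zero exit here is the point of the check: with docker as the active runtime, systemctl is-active crio prints "inactive" and exits 3, which ssh and minikube surface as a failure even though the assertion passed. The active runtime can be inspected the same way (the second invocation is illustrative, not part of the test):
	out/minikube-linux-amd64 -p functional-553834 ssh "sudo systemctl is-active crio"     # inactive, exit 3
	out/minikube-linux-amd64 -p functional-553834 ssh "sudo systemctl is-active docker"   # expected: active, exit 0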

                                                
                                    
x
+
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-553834 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-553834 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-4ttq4" [0b994138-774a-44f3-abea-e4cd32c7c687] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-4ttq4" [0b994138-774a-44f3-abea-e4cd32c7c687] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003207778s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)
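The ServiceCmd subtests that follow all work against this one deployment: it is created from the mirrored echo-server image, exposed as a NodePort on 8080, and later resolved to http://192.168.39.57:31339. A condensed manual equivalent (the wait is an illustrative stand-in for the harness's pod polling):
	kubectl --context functional-553834 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
	kubectl --context functional-553834 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-553834 wait --for=condition=Available deployment/hello-node --timeout=10m
	out/minikube-linux-amd64 -p functional-553834 service hello-node --url    # prints the NodePort endpoint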

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 service list -o json
functional_test.go:1509: Took "447.627289ms" to run "out/minikube-linux-amd64 -p functional-553834 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.39.57:31339
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.39.57:31339
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-553834 docker-env) && out/minikube-linux-amd64 status -p functional-553834"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-553834 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.78s)
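docker-env only prints shell exports that point the local docker CLI at the VM's daemon; the eval wires them into the current shell so that docker images talks to the cluster runtime. Its output is roughly of this shape (the address, port and cert path below are illustrative for this profile, not captured from the run):
	export DOCKER_TLS_VERIFY="1"
	export DOCKER_HOST="tcp://192.168.39.57:2376"
	export DOCKER_CERT_PATH="$MINIKUBE_HOME/certs"
	export MINIKUBE_ACTIVE_DOCKERD="functional-553834"
	# to use: eval $(out/minikube-linux-amd64 -p functional-553834 docker-env)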

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "297.332233ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "58.77439ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "325.050955ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "68.883138ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (15.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdany-port2576708675/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766824612635758109" to /tmp/TestFunctionalparallelMountCmdany-port2576708675/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766824612635758109" to /tmp/TestFunctionalparallelMountCmdany-port2576708675/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766824612635758109" to /tmp/TestFunctionalparallelMountCmdany-port2576708675/001/test-1766824612635758109
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.943007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 08:36:52.826070    9461 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 08:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 08:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 08:36 test-1766824612635758109
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh cat /mount-9p/test-1766824612635758109
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-553834 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [956f607c-b7f0-48e6-9eb0-a216f27b0017] Pending
helpers_test.go:353: "busybox-mount" [956f607c-b7f0-48e6-9eb0-a216f27b0017] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [956f607c-b7f0-48e6-9eb0-a216f27b0017] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [956f607c-b7f0-48e6-9eb0-a216f27b0017] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.008524685s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-553834 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdany-port2576708675/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.87s)
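The any-port variant mounts a host temp directory into the guest at /mount-9p over 9p, checks that host-written files are visible over ssh, lets the busybox-mount pod create /mount-9p/created-by-pod, and then force-unmounts. The first findmnt failure with a retry is normal while the mount daemon is still coming up. A condensed manual version, with /tmp/hostdir standing in for the test's temp directory:
	mkdir -p /tmp/hostdir && echo hello > /tmp/hostdir/created-by-host
	out/minikube-linux-amd64 mount -p functional-553834 /tmp/hostdir:/mount-9p &
	MOUNT_PID=$!
	out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry
	out/minikube-linux-amd64 -p functional-553834 ssh "cat /mount-9p/created-by-host"
	out/minikube-linux-amd64 -p functional-553834 ssh "sudo umount -f /mount-9p"
	kill $MOUNT_PID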

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdspecific-port477075991/001:/mount-9p --alsologtostderr -v=1 --port 44389]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (179.736744ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdspecific-port477075991/001:/mount-9p --alsologtostderr -v=1 --port 44389] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 ssh "sudo umount -f /mount-9p": exit status 1 (235.038635ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-553834 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdspecific-port477075991/001:/mount-9p --alsologtostderr -v=1 --port 44389] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553834 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-553834
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553834 image ls --format short --alsologtostderr:
I1227 08:37:17.313515   15351 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:17.313748   15351 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:17.313756   15351 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:17.313761   15351 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:17.313957   15351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
I1227 08:37:17.314513   15351 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:17.314602   15351 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:17.316691   15351 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:17.318855   15351 main.go:144] libmachine: domain functional-553834 has defined MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:17.319290   15351 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:f0:32", ip: ""} in network mk-functional-553834: {Iface:virbr1 ExpiryTime:2025-12-27 09:33:28 +0000 UTC Type:0 Mac:52:54:00:97:f0:32 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-553834 Clientid:01:52:54:00:97:f0:32}
I1227 08:37:17.319317   15351 main.go:144] libmachine: domain functional-553834 has defined IP address 192.168.39.57 and MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:17.319437   15351 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/functional-553834/id_rsa Username:docker}
I1227 08:37:17.409480   15351 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553834 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                             │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/minikube-local-cache-test       │ functional-553834 │ 867c996cd1467 │ 30B    │
│ public.ecr.aws/docker/library/mysql               │ 8.4               │ 5e3dcc4ab5604 │ 785MB  │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 2c9a4b058bd7e │ 75.8MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/pause                             │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ 550794e3b12ac │ 51.7MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                             │ latest            │ 350b164e7ae1d │ 240kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 04da2b0513cd7 │ 53.7MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-553834 │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ 5c6acd67e9cd1 │ 89.8MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ 32652ff1bbe6b │ 70.7MB │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553834 image ls --format table --alsologtostderr:
I1227 08:37:20.423025   15431 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:20.423307   15431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:20.423317   15431 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:20.423322   15431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:20.423575   15431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
I1227 08:37:20.424172   15431 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:20.424280   15431 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:20.426526   15431 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:20.428919   15431 main.go:144] libmachine: domain functional-553834 has defined MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:20.429381   15431 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:f0:32", ip: ""} in network mk-functional-553834: {Iface:virbr1 ExpiryTime:2025-12-27 09:33:28 +0000 UTC Type:0 Mac:52:54:00:97:f0:32 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-553834 Clientid:01:52:54:00:97:f0:32}
I1227 08:37:20.429412   15431 main.go:144] libmachine: domain functional-553834 has defined IP address 192.168.39.57 and MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:20.429583   15431 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/functional-553834/id_rsa Username:docker}
I1227 08:37:20.523826   15431 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553834 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23","repoDigests":[],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4940000"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":[],"repoTags":["public.ecr.aws/ngi
nx/nginx:alpine"],"size":"53700000"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"89800000"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"75800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"51700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"867c996cd1467533944
d2a0447050ad41223115c7849030b6eac406401bab078","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-553834"],"size":"30"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"70700000"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553834 image ls --format json --alsologtostderr:
I1227 08:37:20.232270   15420 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:20.232387   15420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:20.232396   15420 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:20.232403   15420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:20.232577   15420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
I1227 08:37:20.233193   15420 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:20.233306   15420 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:20.235395   15420 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:20.237719   15420 main.go:144] libmachine: domain functional-553834 has defined MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:20.238137   15420 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:f0:32", ip: ""} in network mk-functional-553834: {Iface:virbr1 ExpiryTime:2025-12-27 09:33:28 +0000 UTC Type:0 Mac:52:54:00:97:f0:32 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-553834 Clientid:01:52:54:00:97:f0:32}
I1227 08:37:20.238160   15420 main.go:144] libmachine: domain functional-553834 has defined IP address 192.168.39.57 and MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:20.238325   15420 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/functional-553834/id_rsa Username:docker}
I1227 08:37:20.324365   15420 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553834 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "75800000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 867c996cd1467533944d2a0447050ad41223115c7849030b6eac406401bab078
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-553834
size: "30"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "89800000"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "51700000"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "70700000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553834 image ls --format yaml --alsologtostderr:
I1227 08:37:17.523674   15362 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:17.523788   15362 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:17.523800   15362 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:17.523806   15362 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:17.524022   15362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
I1227 08:37:17.524561   15362 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:17.524650   15362 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:17.527032   15362 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:17.530004   15362 main.go:144] libmachine: domain functional-553834 has defined MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:17.530626   15362 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:f0:32", ip: ""} in network mk-functional-553834: {Iface:virbr1 ExpiryTime:2025-12-27 09:33:28 +0000 UTC Type:0 Mac:52:54:00:97:f0:32 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-553834 Clientid:01:52:54:00:97:f0:32}
I1227 08:37:17.530664   15362 main.go:144] libmachine: domain functional-553834 has defined IP address 192.168.39.57 and MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:17.530985   15362 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/functional-553834/id_rsa Username:docker}
I1227 08:37:17.627717   15362 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 ssh pgrep buildkitd: exit status 1 (200.242211ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image build -t localhost/my-image:functional-553834 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-553834 image build -t localhost/my-image:functional-553834 testdata/build --alsologtostderr: (3.828628524s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553834 image build -t localhost/my-image:functional-553834 testdata/build --alsologtostderr:
I1227 08:37:17.936421   15399 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:17.936679   15399 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:17.936687   15399 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:17.936692   15399 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:17.936866   15399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
I1227 08:37:17.937476   15399 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:17.938198   15399 config.go:182] Loaded profile config "functional-553834": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 08:37:17.940544   15399 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:17.943087   15399 main.go:144] libmachine: domain functional-553834 has defined MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:17.943575   15399 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:f0:32", ip: ""} in network mk-functional-553834: {Iface:virbr1 ExpiryTime:2025-12-27 09:33:28 +0000 UTC Type:0 Mac:52:54:00:97:f0:32 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-553834 Clientid:01:52:54:00:97:f0:32}
I1227 08:37:17.943612   15399 main.go:144] libmachine: domain functional-553834 has defined IP address 192.168.39.57 and MAC address 52:54:00:97:f0:32 in network mk-functional-553834
I1227 08:37:17.943775   15399 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/functional-553834/id_rsa Username:docker}
I1227 08:37:18.038720   15399 build_images.go:162] Building image from path: /tmp/build.3804976608.tar
I1227 08:37:18.038817   15399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 08:37:18.059090   15399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3804976608.tar
I1227 08:37:18.067193   15399 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3804976608.tar: stat -c "%s %y" /var/lib/minikube/build/build.3804976608.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3804976608.tar': No such file or directory
I1227 08:37:18.067222   15399 ssh_runner.go:362] scp /tmp/build.3804976608.tar --> /var/lib/minikube/build/build.3804976608.tar (3072 bytes)
I1227 08:37:18.123725   15399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3804976608
I1227 08:37:18.144958   15399 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3804976608 -xf /var/lib/minikube/build/build.3804976608.tar
I1227 08:37:18.160542   15399 docker.go:364] Building image: /var/lib/minikube/build/build.3804976608
I1227 08:37:18.160602   15399 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-553834 /var/lib/minikube/build/build.3804976608
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:915ae5944f16ce5fa2d26c7ad62c5bf13ca42197735bea397eb26cec74849c8c
#8 writing image sha256:915ae5944f16ce5fa2d26c7ad62c5bf13ca42197735bea397eb26cec74849c8c done
#8 naming to localhost/my-image:functional-553834 done
#8 DONE 0.1s
I1227 08:37:21.654824   15399 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-553834 /var/lib/minikube/build/build.3804976608: (3.494194127s)
I1227 08:37:21.654924   15399 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3804976608
I1227 08:37:21.674970   15399 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3804976608.tar
I1227 08:37:21.694558   15399 build_images.go:218] Built localhost/my-image:functional-553834 from /tmp/build.3804976608.tar
I1227 08:37:21.694595   15399 build_images.go:134] succeeded building to: functional-553834
I1227 08:37:21.694600   15399 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.24s)
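
For reference, the image-build path above stages a context tarball on the node, unpacks it, and runs a plain docker build against the unpacked directory. A minimal sketch of the same steps run by hand inside the node (the build.3804976608 names are this run's temporary paths; the Dockerfile contents are inferred from stages #5-#7 and are not shown verbatim in the log):

    # Dockerfile (inferred): FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /
    sudo mkdir -p /var/lib/minikube/build/build.3804976608
    sudo tar -C /var/lib/minikube/build/build.3804976608 -xf /var/lib/minikube/build/build.3804976608.tar
    docker build -t localhost/my-image:functional-553834 /var/lib/minikube/build/build.3804976608
    sudo rm -rf /var/lib/minikube/build/build.3804976608
    sudo rm -f /var/lib/minikube/build/build.3804976608.tar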

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076907252/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076907252/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076907252/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T" /mount1: exit status 1 (373.958716ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-553834 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076907252/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076907252/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076907252/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)
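
The cleanup being verified here is that a single kill tears down every mount started against the profile. A hand-run sketch of the same sequence (the host directory is an arbitrary example; the test used a per-run temp dir):

    out/minikube-linux-amd64 mount -p functional-553834 /tmp/hostdir:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-553834 /tmp/hostdir:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-553834 /tmp/hostdir:/mount3 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-553834 ssh "findmnt -T /mount1"   # repeat for /mount2, /mount3
    out/minikube-linux-amd64 mount -p functional-553834 --kill=true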

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-553834 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
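
Taken together, the image subcommands in this block form a save/remove/load round trip through a tarball and back into the host docker daemon. The same sequence by hand (a relative tarball path stands in for the Jenkins workspace path used by the job):

    out/minikube-linux-amd64 -p functional-553834 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834 ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-553834 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
    out/minikube-linux-amd64 -p functional-553834 image load ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-553834 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
    out/minikube-linux-amd64 -p functional-553834 image ls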

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-553834
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-553834
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-553834
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestGvisorAddon (231.64s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-889109 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1227 09:08:15.750226    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-889109 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m33.553453982s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-889109 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-889109 cache add gcr.io/k8s-minikube/gvisor-addon:2: (4.082848481s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-889109 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-889109 addons enable gvisor: (4.16229823s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [8f675bc0-804f-429c-9830-fac72c71e828] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004424437s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-889109 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [933f467f-4502-4a37-9a73-401086f1b27c] Pending
helpers_test.go:353: "nginx-gvisor" [933f467f-4502-4a37-9a73-401086f1b27c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-gvisor" [933f467f-4502-4a37-9a73-401086f1b27c] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 50.005547071s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-889109
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-889109: (11.138880497s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-889109 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-889109 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (49.890869056s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [8f675bc0-804f-429c-9830-fac72c71e828] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:353: "gvisor" [8f675bc0-804f-429c-9830-fac72c71e828] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004503284s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [933f467f-4502-4a37-9a73-401086f1b27c] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004999634s
helpers_test.go:176: Cleaning up "gvisor-889109" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-889109
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-889109: (1.599313477s)
--- PASS: TestGvisorAddon (231.64s)
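
The sequence above doubles as a recipe for running a pod under gVisor: start a containerd-runtime cluster, pre-cache the addon image, enable the addon, then schedule a workload that requests the gvisor runtime. By hand (the nginx manifest is the test's own testdata, referenced only by path; the final get is a hand equivalent of the test's label wait):

    out/minikube-linux-amd64 start -p gvisor-889109 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
    out/minikube-linux-amd64 -p gvisor-889109 cache add gcr.io/k8s-minikube/gvisor-addon:2
    out/minikube-linux-amd64 -p gvisor-889109 addons enable gvisor
    kubectl --context gvisor-889109 replace --force -f testdata/nginx-gvisor.yaml
    kubectl --context gvisor-889109 get pods -l run=nginx,runtime=gvisor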

                                                
                                    
TestMultiControlPlane/serial/StartCluster (213.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1227 08:37:56.544967    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:40:12.702347    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:40:40.385580    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (3m32.87217651s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (213.51s)
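
The HA topology under test is created by a single start --ha, which brings up three control-plane nodes (ha-206429, -m02 and -m03 in the later status output) behind the shared endpoint https://192.168.39.254:8443. The invocation, minus the test's logging flags, is:

    out/minikube-linux-amd64 -p ha-206429 start --ha --memory 3072 --wait true --driver=kvm2
    out/minikube-linux-amd64 -p ha-206429 status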

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 kubectl -- rollout status deployment/busybox: (3.932673187s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-2fqz2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-47xfn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-wxqhg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-2fqz2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-47xfn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-wxqhg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-2fqz2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-47xfn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-wxqhg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.39s)
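
The deploy check is: apply the busybox deployment, wait for the rollout, then confirm cluster DNS from every replica. The per-pod lookup, with a placeholder for the generated pod name:

    out/minikube-linux-amd64 -p ha-206429 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 -p ha-206429 kubectl -- rollout status deployment/busybox
    out/minikube-linux-amd64 -p ha-206429 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 -p ha-206429 kubectl -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local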

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-2fqz2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-2fqz2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-47xfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-47xfn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-wxqhg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 kubectl -- exec busybox-769dd8b7dd-wxqhg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)
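
Host reachability is verified from each pod by resolving host.minikube.internal inside the pod and pinging the address it returns (192.168.39.1 on this run's network). The per-pod pair, with a placeholder pod name:

    out/minikube-linux-amd64 -p ha-206429 kubectl -- exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-206429 kubectl -- exec <pod-name> -- sh -c "ping -c 1 192.168.39.1"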

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 node add --alsologtostderr -v 5
E1227 08:41:38.577010    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:38.582394    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:38.592750    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:38.613114    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:38.653428    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:38.733886    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:38.894374    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:39.214737    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:39.855292    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:41.135824    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:43.696296    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:48.817322    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 node add --alsologtostderr -v 5: (46.831531756s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.56s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-206429 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp testdata/cp-test.txt ha-206429:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1975172941/001/cp-test_ha-206429.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test.txt"
E1227 08:41:59.058731    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429:/home/docker/cp-test.txt ha-206429-m02:/home/docker/cp-test_ha-206429_ha-206429-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test_ha-206429_ha-206429-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429:/home/docker/cp-test.txt ha-206429-m03:/home/docker/cp-test_ha-206429_ha-206429-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test_ha-206429_ha-206429-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429:/home/docker/cp-test.txt ha-206429-m04:/home/docker/cp-test_ha-206429_ha-206429-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test_ha-206429_ha-206429-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp testdata/cp-test.txt ha-206429-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1975172941/001/cp-test_ha-206429-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m02:/home/docker/cp-test.txt ha-206429:/home/docker/cp-test_ha-206429-m02_ha-206429.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test_ha-206429-m02_ha-206429.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m02:/home/docker/cp-test.txt ha-206429-m03:/home/docker/cp-test_ha-206429-m02_ha-206429-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test_ha-206429-m02_ha-206429-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m02:/home/docker/cp-test.txt ha-206429-m04:/home/docker/cp-test_ha-206429-m02_ha-206429-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test_ha-206429-m02_ha-206429-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp testdata/cp-test.txt ha-206429-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1975172941/001/cp-test_ha-206429-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m03:/home/docker/cp-test.txt ha-206429:/home/docker/cp-test_ha-206429-m03_ha-206429.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test_ha-206429-m03_ha-206429.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m03:/home/docker/cp-test.txt ha-206429-m02:/home/docker/cp-test_ha-206429-m03_ha-206429-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test_ha-206429-m03_ha-206429-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m03:/home/docker/cp-test.txt ha-206429-m04:/home/docker/cp-test_ha-206429-m03_ha-206429-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test_ha-206429-m03_ha-206429-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp testdata/cp-test.txt ha-206429-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1975172941/001/cp-test_ha-206429-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m04:/home/docker/cp-test.txt ha-206429:/home/docker/cp-test_ha-206429-m04_ha-206429.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429 "sudo cat /home/docker/cp-test_ha-206429-m04_ha-206429.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m04:/home/docker/cp-test.txt ha-206429-m02:/home/docker/cp-test_ha-206429-m04_ha-206429-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test_ha-206429-m04_ha-206429-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m04:/home/docker/cp-test.txt ha-206429-m03:/home/docker/cp-test_ha-206429-m04_ha-206429-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test_ha-206429-m04_ha-206429-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.93s)
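
Every hop in the copy matrix above follows the same pattern: cp a file onto a node, then ssh into the receiving node and cat it back. One representative pair of hops (node names from this run):

    out/minikube-linux-amd64 -p ha-206429 cp testdata/cp-test.txt ha-206429-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-206429 cp ha-206429-m02:/home/docker/cp-test.txt ha-206429-m03:/home/docker/cp-test_ha-206429-m02_ha-206429-m03.txt
    out/minikube-linux-amd64 -p ha-206429 ssh -n ha-206429-m03 "sudo cat /home/docker/cp-test_ha-206429-m02_ha-206429-m03.txt"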

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (15.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 node stop m02 --alsologtostderr -v 5
E1227 08:42:19.539606    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 node stop m02 --alsologtostderr -v 5: (14.974057206s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5: exit status 7 (563.491667ms)

                                                
                                                
-- stdout --
	ha-206429
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-206429-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-206429-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-206429-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:42:23.723238   18279 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:42:23.723341   18279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:42:23.723345   18279 out.go:374] Setting ErrFile to fd 2...
	I1227 08:42:23.723349   18279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:42:23.723572   18279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:42:23.723735   18279 out.go:368] Setting JSON to false
	I1227 08:42:23.723757   18279 mustload.go:66] Loading cluster: ha-206429
	I1227 08:42:23.723993   18279 notify.go:221] Checking for updates...
	I1227 08:42:23.725003   18279 config.go:182] Loaded profile config "ha-206429": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:42:23.725038   18279 status.go:174] checking status of ha-206429 ...
	I1227 08:42:23.728103   18279 status.go:371] ha-206429 host status = "Running" (err=<nil>)
	I1227 08:42:23.728127   18279 host.go:66] Checking if "ha-206429" exists ...
	I1227 08:42:23.731360   18279 main.go:144] libmachine: domain ha-206429 has defined MAC address 52:54:00:54:f3:4e in network mk-ha-206429
	I1227 08:42:23.731847   18279 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:54:f3:4e", ip: ""} in network mk-ha-206429: {Iface:virbr1 ExpiryTime:2025-12-27 09:37:43 +0000 UTC Type:0 Mac:52:54:00:54:f3:4e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-206429 Clientid:01:52:54:00:54:f3:4e}
	I1227 08:42:23.731891   18279 main.go:144] libmachine: domain ha-206429 has defined IP address 192.168.39.7 and MAC address 52:54:00:54:f3:4e in network mk-ha-206429
	I1227 08:42:23.732133   18279 host.go:66] Checking if "ha-206429" exists ...
	I1227 08:42:23.732339   18279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:42:23.734921   18279 main.go:144] libmachine: domain ha-206429 has defined MAC address 52:54:00:54:f3:4e in network mk-ha-206429
	I1227 08:42:23.735423   18279 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:54:f3:4e", ip: ""} in network mk-ha-206429: {Iface:virbr1 ExpiryTime:2025-12-27 09:37:43 +0000 UTC Type:0 Mac:52:54:00:54:f3:4e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-206429 Clientid:01:52:54:00:54:f3:4e}
	I1227 08:42:23.735446   18279 main.go:144] libmachine: domain ha-206429 has defined IP address 192.168.39.7 and MAC address 52:54:00:54:f3:4e in network mk-ha-206429
	I1227 08:42:23.735651   18279 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/ha-206429/id_rsa Username:docker}
	I1227 08:42:23.824279   18279 ssh_runner.go:195] Run: systemctl --version
	I1227 08:42:23.832003   18279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:42:23.850835   18279 kubeconfig.go:125] found "ha-206429" server: "https://192.168.39.254:8443"
	I1227 08:42:23.850874   18279 api_server.go:166] Checking apiserver status ...
	I1227 08:42:23.850915   18279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:42:23.872378   18279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2596/cgroup
	I1227 08:42:23.887005   18279 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2596/cgroup
	I1227 08:42:23.899261   18279 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ff53a503aa9825ad3066e6aa2504957.slice/docker-84ea66d04f74fa13329a44c47d649e0f5b1becebcd65bc5151ddbe37f259a053.scope/cgroup.freeze
	I1227 08:42:23.911863   18279 api_server.go:299] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1227 08:42:23.919156   18279 api_server.go:325] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1227 08:42:23.919186   18279 status.go:463] ha-206429 apiserver status = Running (err=<nil>)
	I1227 08:42:23.919195   18279 status.go:176] ha-206429 status: &{Name:ha-206429 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:42:23.919219   18279 status.go:174] checking status of ha-206429-m02 ...
	I1227 08:42:23.920859   18279 status.go:371] ha-206429-m02 host status = "Stopped" (err=<nil>)
	I1227 08:42:23.920877   18279 status.go:384] host is not running, skipping remaining checks
	I1227 08:42:23.920882   18279 status.go:176] ha-206429-m02 status: &{Name:ha-206429-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:42:23.920895   18279 status.go:174] checking status of ha-206429-m03 ...
	I1227 08:42:23.922263   18279 status.go:371] ha-206429-m03 host status = "Running" (err=<nil>)
	I1227 08:42:23.922279   18279 host.go:66] Checking if "ha-206429-m03" exists ...
	I1227 08:42:23.924739   18279 main.go:144] libmachine: domain ha-206429-m03 has defined MAC address 52:54:00:0e:34:8c in network mk-ha-206429
	I1227 08:42:23.925366   18279 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:34:8c", ip: ""} in network mk-ha-206429: {Iface:virbr1 ExpiryTime:2025-12-27 09:39:42 +0000 UTC Type:0 Mac:52:54:00:0e:34:8c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-206429-m03 Clientid:01:52:54:00:0e:34:8c}
	I1227 08:42:23.925402   18279 main.go:144] libmachine: domain ha-206429-m03 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:34:8c in network mk-ha-206429
	I1227 08:42:23.925631   18279 host.go:66] Checking if "ha-206429-m03" exists ...
	I1227 08:42:23.925926   18279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:42:23.928300   18279 main.go:144] libmachine: domain ha-206429-m03 has defined MAC address 52:54:00:0e:34:8c in network mk-ha-206429
	I1227 08:42:23.928653   18279 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:34:8c", ip: ""} in network mk-ha-206429: {Iface:virbr1 ExpiryTime:2025-12-27 09:39:42 +0000 UTC Type:0 Mac:52:54:00:0e:34:8c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-206429-m03 Clientid:01:52:54:00:0e:34:8c}
	I1227 08:42:23.928672   18279 main.go:144] libmachine: domain ha-206429-m03 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:34:8c in network mk-ha-206429
	I1227 08:42:23.928815   18279 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/ha-206429-m03/id_rsa Username:docker}
	I1227 08:42:24.013000   18279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:42:24.033753   18279 kubeconfig.go:125] found "ha-206429" server: "https://192.168.39.254:8443"
	I1227 08:42:24.033800   18279 api_server.go:166] Checking apiserver status ...
	I1227 08:42:24.033835   18279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:42:24.060119   18279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2461/cgroup
	I1227 08:42:24.074581   18279 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2461/cgroup
	I1227 08:42:24.088227   18279 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf02a9eba5e923101c5e25f1664d8db8d.slice/docker-3ef71bdcec645c8295af64c1da418016b50a09dac3839e6d1b411134cecb545f.scope/cgroup.freeze
	I1227 08:42:24.103656   18279 api_server.go:299] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1227 08:42:24.108925   18279 api_server.go:325] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1227 08:42:24.108953   18279 status.go:463] ha-206429-m03 apiserver status = Running (err=<nil>)
	I1227 08:42:24.108961   18279 status.go:176] ha-206429-m03 status: &{Name:ha-206429-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:42:24.108978   18279 status.go:174] checking status of ha-206429-m04 ...
	I1227 08:42:24.110604   18279 status.go:371] ha-206429-m04 host status = "Running" (err=<nil>)
	I1227 08:42:24.110629   18279 host.go:66] Checking if "ha-206429-m04" exists ...
	I1227 08:42:24.113226   18279 main.go:144] libmachine: domain ha-206429-m04 has defined MAC address 52:54:00:b9:eb:57 in network mk-ha-206429
	I1227 08:42:24.113734   18279 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:eb:57", ip: ""} in network mk-ha-206429: {Iface:virbr1 ExpiryTime:2025-12-27 09:41:25 +0000 UTC Type:0 Mac:52:54:00:b9:eb:57 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-206429-m04 Clientid:01:52:54:00:b9:eb:57}
	I1227 08:42:24.113766   18279 main.go:144] libmachine: domain ha-206429-m04 has defined IP address 192.168.39.77 and MAC address 52:54:00:b9:eb:57 in network mk-ha-206429
	I1227 08:42:24.113971   18279 host.go:66] Checking if "ha-206429-m04" exists ...
	I1227 08:42:24.114268   18279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:42:24.116814   18279 main.go:144] libmachine: domain ha-206429-m04 has defined MAC address 52:54:00:b9:eb:57 in network mk-ha-206429
	I1227 08:42:24.117274   18279 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:eb:57", ip: ""} in network mk-ha-206429: {Iface:virbr1 ExpiryTime:2025-12-27 09:41:25 +0000 UTC Type:0 Mac:52:54:00:b9:eb:57 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-206429-m04 Clientid:01:52:54:00:b9:eb:57}
	I1227 08:42:24.117304   18279 main.go:144] libmachine: domain ha-206429-m04 has defined IP address 192.168.39.77 and MAC address 52:54:00:b9:eb:57 in network mk-ha-206429
	I1227 08:42:24.117437   18279 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/ha-206429-m04/id_rsa Username:docker}
	I1227 08:42:24.208395   18279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:42:24.226125   18279 status.go:176] ha-206429-m04 status: &{Name:ha-206429-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (15.54s)
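
The status block above is the expected shape after stopping one control-plane member: the stopped node reports host, kubelet, apiserver and kubeconfig as Stopped, and status exits with code 7 rather than 0 (as it did here). The pair of commands:

    out/minikube-linux-amd64 -p ha-206429 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5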

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (25.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 node start m02 --alsologtostderr -v 5: (24.696607531s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.69s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (155.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 stop --alsologtostderr -v 5
E1227 08:43:00.500226    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 stop --alsologtostderr -v 5: (41.592409484s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 start --wait true --alsologtostderr -v 5
E1227 08:44:22.421810    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:45:12.701877    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 start --wait true --alsologtostderr -v 5: (1m53.439171466s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (155.17s)
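
The property under test is that a full stop/start cycle preserves the node set: record the node list, stop everything, start again with --wait true, and compare. By hand (the test also passes --alsologtostderr -v 5 to each command):

    out/minikube-linux-amd64 -p ha-206429 node list
    out/minikube-linux-amd64 -p ha-206429 stop
    out/minikube-linux-amd64 -p ha-206429 start --wait true
    out/minikube-linux-amd64 -p ha-206429 node list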

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 node delete m03 --alsologtostderr -v 5: (6.758675911s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (42.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 stop --alsologtostderr -v 5: (42.898795137s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5: exit status 7 (62.422934ms)

                                                
                                                
-- stdout --
	ha-206429
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-206429-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-206429-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:46:17.596739   19808 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:46:17.597015   19808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:46:17.597043   19808 out.go:374] Setting ErrFile to fd 2...
	I1227 08:46:17.597107   19808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:46:17.597421   19808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:46:17.597641   19808 out.go:368] Setting JSON to false
	I1227 08:46:17.597670   19808 mustload.go:66] Loading cluster: ha-206429
	I1227 08:46:17.597782   19808 notify.go:221] Checking for updates...
	I1227 08:46:17.598197   19808 config.go:182] Loaded profile config "ha-206429": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:46:17.598226   19808 status.go:174] checking status of ha-206429 ...
	I1227 08:46:17.600223   19808 status.go:371] ha-206429 host status = "Stopped" (err=<nil>)
	I1227 08:46:17.600240   19808 status.go:384] host is not running, skipping remaining checks
	I1227 08:46:17.600247   19808 status.go:176] ha-206429 status: &{Name:ha-206429 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:46:17.600267   19808 status.go:174] checking status of ha-206429-m02 ...
	I1227 08:46:17.601453   19808 status.go:371] ha-206429-m02 host status = "Stopped" (err=<nil>)
	I1227 08:46:17.601468   19808 status.go:384] host is not running, skipping remaining checks
	I1227 08:46:17.601474   19808 status.go:176] ha-206429-m02 status: &{Name:ha-206429-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:46:17.601493   19808 status.go:174] checking status of ha-206429-m04 ...
	I1227 08:46:17.602562   19808 status.go:371] ha-206429-m04 host status = "Stopped" (err=<nil>)
	I1227 08:46:17.602577   19808 status.go:384] host is not running, skipping remaining checks
	I1227 08:46:17.602583   19808 status.go:176] ha-206429-m04 status: &{Name:ha-206429-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (109.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 start --wait true --alsologtostderr -v 5 --driver=kvm2 
E1227 08:46:38.575892    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:47:06.262280    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (1m48.981050514s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (109.68s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (102.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-206429 node add --control-plane --alsologtostderr -v 5: (1m41.650271943s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-206429 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (102.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestImageBuild/serial/Setup (37.04s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-009066 --driver=kvm2 
E1227 08:50:12.704830    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-009066 --driver=kvm2 : (37.035321355s)
--- PASS: TestImageBuild/serial/Setup (37.04s)

TestImageBuild/serial/NormalBuild (1.48s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-009066
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-009066: (1.484446106s)
--- PASS: TestImageBuild/serial/NormalBuild (1.48s)

TestImageBuild/serial/BuildWithBuildArg (1.02s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-009066
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-009066: (1.018806193s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)

TestImageBuild/serial/BuildWithDockerIgnore (0.93s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-009066
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.93s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-009066
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)

TestJSONOutput/start/Command (82.92s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-584174 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
E1227 08:51:35.748037    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:51:38.577990    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-584174 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m22.920268172s)
--- PASS: TestJSONOutput/start/Command (82.92s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-584174 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-584174 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (11.81s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-584174 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-584174 --output=json --user=testUser: (11.807146416s)
--- PASS: TestJSONOutput/stop/Command (11.81s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-635110 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-635110 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.966766ms)
-- stdout --
	{"specversion":"1.0","id":"00edcb5b-33d4-4b67-853a-0b3abad05a68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-635110] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc687b3b-0fdc-4f56-8295-9cd3cdfbfadd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22344"}}
	{"specversion":"1.0","id":"33b2ea9b-64c7-4961-8a13-cccb3aeedc17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9117b06c-ffdd-48ee-90c6-78b9623bfbec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig"}}
	{"specversion":"1.0","id":"89eb74ec-9999-457c-a7f2-85ba08ff584e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube"}}
	{"specversion":"1.0","id":"76a82cc5-d7bc-4e68-83bd-12f0ee01dd89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"468ca204-7ebb-450e-98a9-373d4469dede","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e8b04ca3-f782-4792-8d71-69c378a29d9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-635110" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-635110
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (79.62s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-739389 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-739389 --driver=kvm2 : (40.216437245s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-741777 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-741777 --driver=kvm2 : (36.727790064s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-739389
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-741777
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-741777" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-741777
helpers_test.go:176: Cleaning up "first-739389" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-739389
--- PASS: TestMinikubeProfile (79.62s)

TestMountStart/serial/StartWithMountFirst (20.29s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-817954 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-817954 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (19.293750159s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.29s)

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-817954 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-817954 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (21.17s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-834751 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-834751 --memory=3072 --mount-string /tmp/TestMountStartserial2539336940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (20.168142505s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.17s)

TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-834751 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-834751 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-817954 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-834751 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-834751 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-834751
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-834751: (1.268720751s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (19.4s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-834751
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-834751: (18.399122921s)
--- PASS: TestMountStart/serial/RestartStopped (19.40s)

TestMountStart/serial/VerifyMountPostStop (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-834751 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-834751 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/DeployApp2Nodes (24.65s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-899276 -- rollout status deployment/busybox: (22.990586475s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (24.65s)

TestMultiNode/serial/PingHostFrom2Pods (0.89s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-p4j54 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-899276 -- exec busybox-769dd8b7dd-pjzv6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

TestMultiNode/serial/AddNode (45.74s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-899276 -v=5 --alsologtostderr
E1227 08:56:38.576085    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-899276 -v=5 --alsologtostderr: (45.268626382s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.74s)

TestMultiNode/serial/ProfileList (0.48s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

TestMultiNode/serial/CopyFile (6.04s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp testdata/cp-test.txt multinode-899276:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4183166969/001/cp-test_multinode-899276.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276:/home/docker/cp-test.txt multinode-899276-m02:/home/docker/cp-test_multinode-899276_multinode-899276-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m02 "sudo cat /home/docker/cp-test_multinode-899276_multinode-899276-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276:/home/docker/cp-test.txt multinode-899276-m03:/home/docker/cp-test_multinode-899276_multinode-899276-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m03 "sudo cat /home/docker/cp-test_multinode-899276_multinode-899276-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp testdata/cp-test.txt multinode-899276-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4183166969/001/cp-test_multinode-899276-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276-m02:/home/docker/cp-test.txt multinode-899276:/home/docker/cp-test_multinode-899276-m02_multinode-899276.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276 "sudo cat /home/docker/cp-test_multinode-899276-m02_multinode-899276.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276-m02:/home/docker/cp-test.txt multinode-899276-m03:/home/docker/cp-test_multinode-899276-m02_multinode-899276-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m03 "sudo cat /home/docker/cp-test_multinode-899276-m02_multinode-899276-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp testdata/cp-test.txt multinode-899276-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4183166969/001/cp-test_multinode-899276-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276-m03:/home/docker/cp-test.txt multinode-899276:/home/docker/cp-test_multinode-899276-m03_multinode-899276.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276 "sudo cat /home/docker/cp-test_multinode-899276-m03_multinode-899276.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 cp multinode-899276-m03:/home/docker/cp-test.txt multinode-899276-m02:/home/docker/cp-test_multinode-899276-m03_multinode-899276-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 ssh -n multinode-899276-m02 "sudo cat /home/docker/cp-test_multinode-899276-m03_multinode-899276-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.04s)

TestMultiNode/serial/StopNode (2.52s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-899276 node stop m03: (1.799976004s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-899276 status: exit status 7 (361.281721ms)
-- stdout --
	multinode-899276
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-899276-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-899276-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-899276 status --alsologtostderr: exit status 7 (354.258937ms)
-- stdout --
	multinode-899276
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-899276-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-899276-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 08:57:28.918076   25986 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:57:28.918314   25986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:28.918322   25986 out.go:374] Setting ErrFile to fd 2...
	I1227 08:57:28.918326   25986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:28.918518   25986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 08:57:28.918690   25986 out.go:368] Setting JSON to false
	I1227 08:57:28.918721   25986 mustload.go:66] Loading cluster: multinode-899276
	I1227 08:57:28.918845   25986 notify.go:221] Checking for updates...
	I1227 08:57:28.919070   25986 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 08:57:28.919086   25986 status.go:174] checking status of multinode-899276 ...
	I1227 08:57:28.921116   25986 status.go:371] multinode-899276 host status = "Running" (err=<nil>)
	I1227 08:57:28.921138   25986 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:57:28.923838   25986 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:57:28.924341   25986 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:57:28.924372   25986 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:57:28.924505   25986 host.go:66] Checking if "multinode-899276" exists ...
	I1227 08:57:28.924773   25986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:57:28.926933   25986 main.go:144] libmachine: domain multinode-899276 has defined MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:57:28.927275   25986 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:5c:b4", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:54:51 +0000 UTC Type:0 Mac:52:54:00:4c:5c:b4 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-899276 Clientid:01:52:54:00:4c:5c:b4}
	I1227 08:57:28.927293   25986 main.go:144] libmachine: domain multinode-899276 has defined IP address 192.168.39.24 and MAC address 52:54:00:4c:5c:b4 in network mk-multinode-899276
	I1227 08:57:28.927445   25986 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276/id_rsa Username:docker}
	I1227 08:57:29.015848   25986 ssh_runner.go:195] Run: systemctl --version
	I1227 08:57:29.022336   25986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:57:29.039089   25986 kubeconfig.go:125] found "multinode-899276" server: "https://192.168.39.24:8443"
	I1227 08:57:29.039143   25986 api_server.go:166] Checking apiserver status ...
	I1227 08:57:29.039200   25986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:57:29.059966   25986 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2401/cgroup
	I1227 08:57:29.071159   25986 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2401/cgroup
	I1227 08:57:29.082389   25986 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc404cd99462f198be5d3d2f10fc1e72e.slice/docker-14fb1b4cc933aa4cd12ef035c84747f011cfb35e74f0cb14e3690295fec82f89.scope/cgroup.freeze
	I1227 08:57:29.094583   25986 api_server.go:299] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1227 08:57:29.099612   25986 api_server.go:325] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1227 08:57:29.099641   25986 status.go:463] multinode-899276 apiserver status = Running (err=<nil>)
	I1227 08:57:29.099650   25986 status.go:176] multinode-899276 status: &{Name:multinode-899276 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:57:29.099666   25986 status.go:174] checking status of multinode-899276-m02 ...
	I1227 08:57:29.101355   25986 status.go:371] multinode-899276-m02 host status = "Running" (err=<nil>)
	I1227 08:57:29.101375   25986 host.go:66] Checking if "multinode-899276-m02" exists ...
	I1227 08:57:29.103962   25986 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:57:29.104435   25986 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:57:29.104459   25986 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:57:29.104631   25986 host.go:66] Checking if "multinode-899276-m02" exists ...
	I1227 08:57:29.104830   25986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:57:29.107225   25986 main.go:144] libmachine: domain multinode-899276-m02 has defined MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:57:29.107608   25986 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:64", ip: ""} in network mk-multinode-899276: {Iface:virbr1 ExpiryTime:2025-12-27 09:55:56 +0000 UTC Type:0 Mac:52:54:00:9b:0b:64 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-899276-m02 Clientid:01:52:54:00:9b:0b:64}
	I1227 08:57:29.107631   25986 main.go:144] libmachine: domain multinode-899276-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:9b:0b:64 in network mk-multinode-899276
	I1227 08:57:29.107789   25986 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22344-5516/.minikube/machines/multinode-899276-m02/id_rsa Username:docker}
	I1227 08:57:29.192236   25986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:57:29.210591   25986 status.go:176] multinode-899276-m02 status: &{Name:multinode-899276-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:57:29.210628   25986 status.go:174] checking status of multinode-899276-m03 ...
	I1227 08:57:29.212323   25986 status.go:371] multinode-899276-m03 host status = "Stopped" (err=<nil>)
	I1227 08:57:29.212349   25986 status.go:384] host is not running, skipping remaining checks
	I1227 08:57:29.212356   25986 status.go:176] multinode-899276-m03 status: &{Name:multinode-899276-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.52s)

TestMultiNode/serial/StartAfterStop (43.81s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 node start m03 -v=5 --alsologtostderr
E1227 08:58:01.623854    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-899276 node start m03 -v=5 --alsologtostderr: (43.280073432s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (43.81s)

TestMultiNode/serial/RestartKeepsNodes (208.1s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-899276
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-899276
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-899276: (1m29.267063678s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-899276 --wait=true -v=5 --alsologtostderr
E1227 09:00:12.702140    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:01:38.576590    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-899276 --wait=true -v=5 --alsologtostderr: (1m58.714281686s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-899276
--- PASS: TestMultiNode/serial/RestartKeepsNodes (208.10s)

TestMultiNode/serial/DeleteNode (2.15s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-899276 node delete m03: (1.683216316s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)

TestMultiNode/serial/StopMultiNode (23.86s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-899276 stop: (23.74211671s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-899276 status: exit status 7 (60.701977ms)
-- stdout --
	multinode-899276
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-899276-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-899276 status --alsologtostderr: exit status 7 (61.714891ms)
-- stdout --
	multinode-899276
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-899276-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 09:02:07.140340   27514 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:02:07.140629   27514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:02:07.140641   27514 out.go:374] Setting ErrFile to fd 2...
	I1227 09:02:07.140646   27514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:02:07.140902   27514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 09:02:07.141138   27514 out.go:368] Setting JSON to false
	I1227 09:02:07.141174   27514 mustload.go:66] Loading cluster: multinode-899276
	I1227 09:02:07.141290   27514 notify.go:221] Checking for updates...
	I1227 09:02:07.141668   27514 config.go:182] Loaded profile config "multinode-899276": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:02:07.141687   27514 status.go:174] checking status of multinode-899276 ...
	I1227 09:02:07.144200   27514 status.go:371] multinode-899276 host status = "Stopped" (err=<nil>)
	I1227 09:02:07.144217   27514 status.go:384] host is not running, skipping remaining checks
	I1227 09:02:07.144222   27514 status.go:176] multinode-899276 status: &{Name:multinode-899276 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:02:07.144239   27514 status.go:174] checking status of multinode-899276-m02 ...
	I1227 09:02:07.145582   27514 status.go:371] multinode-899276-m02 host status = "Stopped" (err=<nil>)
	I1227 09:02:07.145602   27514 status.go:384] host is not running, skipping remaining checks
	I1227 09:02:07.145607   27514 status.go:176] multinode-899276-m02 status: &{Name:multinode-899276-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)

TestMultiNode/serial/RestartMultiNode (87.29s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-899276 --wait=true -v=5 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-899276 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (1m26.808676682s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-899276 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.29s)

TestMultiNode/serial/ValidateNameConflict (38.96s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-899276
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-899276-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-899276-m02 --driver=kvm2 : exit status 14 (78.605923ms)
-- stdout --
	* [multinode-899276-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-899276-m02' is duplicated with machine name 'multinode-899276-m02' in profile 'multinode-899276'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-899276-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-899276-m03 --driver=kvm2 : (37.792697359s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-899276
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-899276: exit status 80 (230.050451ms)
-- stdout --
	* Adding node m03 to cluster multinode-899276 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-899276-m03 already exists in multinode-899276-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-899276-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.96s)

TestScheduledStopUnix (108.85s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-397867 --memory=3072 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-397867 --memory=3072 --driver=kvm2 : (37.175832207s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-397867 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1227 09:04:53.767411   28902 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:04:53.767679   28902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:04:53.767690   28902 out.go:374] Setting ErrFile to fd 2...
	I1227 09:04:53.767694   28902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:04:53.767882   28902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 09:04:53.768136   28902 out.go:368] Setting JSON to false
	I1227 09:04:53.768216   28902 mustload.go:66] Loading cluster: scheduled-stop-397867
	I1227 09:04:53.768517   28902 config.go:182] Loaded profile config "scheduled-stop-397867": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:04:53.768598   28902 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/scheduled-stop-397867/config.json ...
	I1227 09:04:53.768782   28902 mustload.go:66] Loading cluster: scheduled-stop-397867
	I1227 09:04:53.768880   28902 config.go:182] Loaded profile config "scheduled-stop-397867": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-397867 -n scheduled-stop-397867
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-397867 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 09:04:54.082695   28948 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:04:54.083004   28948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:04:54.083026   28948 out.go:374] Setting ErrFile to fd 2...
	I1227 09:04:54.083034   28948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:04:54.083328   28948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 09:04:54.083678   28948 out.go:368] Setting JSON to false
	I1227 09:04:54.083919   28948 daemonize_unix.go:73] killing process 28937 as it is an old scheduled stop
	I1227 09:04:54.084069   28948 mustload.go:66] Loading cluster: scheduled-stop-397867
	I1227 09:04:54.084527   28948 config.go:182] Loaded profile config "scheduled-stop-397867": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:04:54.084607   28948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/scheduled-stop-397867/config.json ...
	I1227 09:04:54.084787   28948 mustload.go:66] Loading cluster: scheduled-stop-397867
	I1227 09:04:54.084922   28948 config.go:182] Loaded profile config "scheduled-stop-397867": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 28937 is a zombie
I1227 09:04:54.089699    9461 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/scheduled-stop-397867/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-397867 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1227 09:05:12.705138    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-397867 -n scheduled-stop-397867
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-397867
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-397867 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 09:05:19.834459   29096 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:05:19.834761   29096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:19.834770   29096 out.go:374] Setting ErrFile to fd 2...
	I1227 09:05:19.834776   29096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:19.835440   29096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-5516/.minikube/bin
	I1227 09:05:19.835799   29096 out.go:368] Setting JSON to false
	I1227 09:05:19.835897   29096 mustload.go:66] Loading cluster: scheduled-stop-397867
	I1227 09:05:19.836284   29096 config.go:182] Loaded profile config "scheduled-stop-397867": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:05:19.836375   29096 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/scheduled-stop-397867/config.json ...
	I1227 09:05:19.836584   29096 mustload.go:66] Loading cluster: scheduled-stop-397867
	I1227 09:05:19.836704   29096 config.go:182] Loaded profile config "scheduled-stop-397867": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-397867
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-397867: exit status 7 (61.286251ms)

                                                
                                                
-- stdout --
	scheduled-stop-397867
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-397867 -n scheduled-stop-397867
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-397867 -n scheduled-stop-397867: exit status 7 (60.406651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-397867" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-397867
--- PASS: TestScheduledStopUnix (108.85s)

                                                
                                    
TestSkaffold (118.98s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2731453002 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-458569 --memory=3072 --driver=kvm2 
E1227 09:06:38.578585    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-458569 --memory=3072 --driver=kvm2 : (37.530957816s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2731453002 run --minikube-profile skaffold-458569 --kube-context skaffold-458569 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2731453002 run --minikube-profile skaffold-458569 --kube-context skaffold-458569 --status-check=true --port-forward=false --interactive=false: (1m8.734105823s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-75c98f77d5-k8f42" [5c2b01a9-8143-4f80-a129-1c8a96370e78] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004032392s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-ccf97b6d5-z7dfx" [559fd85a-ea06-48f3-932b-7ff737f90a3d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00409624s
helpers_test.go:176: Cleaning up "skaffold-458569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-458569
--- PASS: TestSkaffold (118.98s)

                                                
                                    
TestRunningBinaryUpgrade (352.17s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1498994926 start -p running-upgrade-920518 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1498994926 start -p running-upgrade-920518 --memory=3072 --vm-driver=kvm2 : (58.445897893s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-920518 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
E1227 09:13:02.594953    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-920518 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (4m51.707925313s)
helpers_test.go:176: Cleaning up "running-upgrade-920518" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-920518
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-920518: (1.309793011s)
--- PASS: TestRunningBinaryUpgrade (352.17s)

                                                
                                    
TestKubernetesUpgrade (186.02s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
E1227 09:10:12.701849    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (1m2.251573992s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-503213 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-503213 --alsologtostderr: (12.210234203s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-503213 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-503213 status --format={{.Host}}: exit status 7 (63.295095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 
E1227 09:11:38.576522    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 : (46.541230633s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-503213 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (98.046043ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-503213] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-503213
	    minikube start -p kubernetes-upgrade-503213 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5032132 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-503213 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-503213 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 : (1m2.811344703s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-503213" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-503213
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-503213: (1.977305604s)
--- PASS: TestKubernetesUpgrade (186.02s)

                                                
                                    
TestISOImage/Setup (22.53s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-230404 --no-kubernetes --memory=2500mb --driver=kvm2 
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-230404 --no-kubernetes --memory=2500mb --driver=kvm2 : (22.531837278s)
--- PASS: TestISOImage/Setup (22.53s)

                                                
                                    
TestISOImage/Binaries/crictl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)

                                                
                                    
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
TestISOImage/Binaries/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

                                                
                                    
TestISOImage/Binaries/git (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.17s)

                                                
                                    
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
TestISOImage/Binaries/podman (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.16s)

                                                
                                    
TestISOImage/Binaries/rsync (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

                                                
                                    
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (93.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.949116294 start -p stopped-upgrade-665508 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.949116294 start -p stopped-upgrade-665508 --memory=3072 --vm-driver=kvm2 : (53.251592376s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.949116294 -p stopped-upgrade-665508 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.949116294 -p stopped-upgrade-665508 stop: (4.307322782s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-665508 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-665508 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (36.119130806s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (93.68s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-665508
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-665508: (1.027061594s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                    
TestPause/serial/Start (83.12s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-282693 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-282693 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (1m23.123389144s)
--- PASS: TestPause/serial/Start (83.12s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (126.92s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-615527 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-615527 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 : (1m51.608760821s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-615527 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-615527 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (1.364719127s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-615527
E1227 09:15:10.519999    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:15:12.701530    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-615527: (13.947562419s)
--- PASS: TestPreload/Start-NoPreload-PullImage (126.92s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-542439 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-542439 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (84.885764ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-542439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-5516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-5516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-542439 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-542439 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (39.349373634s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-542439 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.60s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (70.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-282693 --alsologtostderr -v=1 --driver=kvm2 
E1227 09:14:41.624875    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.038459    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.043840    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.054200    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.074556    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.114940    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.195391    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.356507    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:50.677241    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:51.317780    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:52.598086    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:14:55.159176    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:15:00.279733    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-282693 --alsologtostderr -v=1 --driver=kvm2 : (1m9.998436869s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (70.02s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-542439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-542439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (14.093823668s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-542439 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-542439 status -o json: exit status 2 (241.791368ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-542439","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-542439
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.21s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (48.89s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-615527 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-615527 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 : (48.682775341s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-615527 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (48.89s)

                                                
                                    
TestNoKubernetes/serial/Start (31.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-542439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E1227 09:15:31.000744    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:15:36.198102    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-542439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (31.815490364s)
--- PASS: TestNoKubernetes/serial/Start (31.82s)

                                                
                                    
TestPause/serial/Pause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-282693 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-282693 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-282693 --output=json --layout=cluster: exit status 2 (252.941306ms)

                                                
                                                
-- stdout --
	{"Name":"pause-282693","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-282693","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-282693 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-282693 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
TestPause/serial/DeletePaused (0.9s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-282693 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.90s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.55s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (91.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m31.208552089s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.21s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22344-5516/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-542439 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-542439 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.899093ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.58s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-542439
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-542439: (1.377158727s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (34.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-542439 --driver=kvm2 
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-542439 --driver=kvm2 : (34.471229722s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (34.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (90.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1227 09:16:11.961777    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:38.576485    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/functional-553834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m30.922960586s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.92s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-542439 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-542439 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.032112ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (105.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m45.572217474s)
--- PASS: TestNetworkPlugins/group/calico/Start (105.57s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-149759 "pgrep -a kubelet"
I1227 09:17:21.867631    9461 config.go:182] Loaded profile config "auto-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-psn7w" [36d39a79-489a-4179-8b58-75d58dd0cde8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-psn7w" [36d39a79-489a-4179-8b58-75d58dd0cde8] Running
E1227 09:17:33.882711    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.00669911s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-b5cdc" [709e399d-d352-4f30-9158-6ddf5e9b19f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005517137s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-149759 "pgrep -a kubelet"
I1227 09:17:48.961997    9461 config.go:182] Loaded profile config "kindnet-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-np7gl" [0370fcb9-c693-4a6f-9be9-3db1aa573e6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-np7gl" [0370fcb9-c693-4a6f-9be9-3db1aa573e6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005088405s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (61.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m1.530288354s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.53s)

                                                
                                    
TestNetworkPlugins/group/false/Start (85.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1227 09:17:52.352445    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m25.06084052s)
--- PASS: TestNetworkPlugins/group/false/Start (85.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (104.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1227 09:18:20.038453    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m44.37297124s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.37s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-7ksjv" [b6f0fee9-74de-4ed8-a615-1b981ee22084] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-7ksjv" [b6f0fee9-74de-4ed8-a615-1b981ee22084] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005223027s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-149759 "pgrep -a kubelet"
I1227 09:18:32.958245    9461 config.go:182] Loaded profile config "calico-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rx7tg" [5cbe1e80-ffe1-4e29-b301-c09edf79b08c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rx7tg" [5cbe1e80-ffe1-4e29-b301-c09edf79b08c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006854931s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.29s)
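
Note: every NetCatPod step does the same thing: force-replace the netcat Deployment from testdata/netcat-deployment.yaml and wait for an app=netcat pod to report Ready (the container is named dnsutils in the status lines above). A rough imperative equivalent, with a placeholder image since the actual manifest is not reproduced in this report:

    # placeholder image; the real testdata manifest pins its own dnsutils container
    kubectl --context calico-149759 create deployment netcat \
      --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7 --port=8080
    kubectl --context calico-149759 wait --for=condition=Available deployment/netcat --timeout=15m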

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-149759 "pgrep -a kubelet"
I1227 09:18:52.902291    9461 config.go:182] Loaded profile config "custom-flannel-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zmh49" [96051a3e-1741-4b71-a28d-9c16de8cfa84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zmh49" [96051a3e-1741-4b71-a28d-9c16de8cfa84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005179156s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (64.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m4.301751454s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-149759 "pgrep -a kubelet"
I1227 09:19:17.333722    9461 config.go:182] Loaded profile config "false-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lb84q" [eac3ea7f-37e5-4674-a781-dc448a84c8de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lb84q" [eac3ea7f-37e5-4674-a781-dc448a84c8de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004700061s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (91.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m31.860137981s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.86s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (88.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E1227 09:19:50.037460    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-149759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m28.643568081s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (88.64s)
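
Note: of the variants in this group, bridge uses the minimal CNI bridge plugin (--cni=bridge) while kubenet is requested through --network-plugin=kubenet rather than --cni; both end up providing plain bridged pod networking, which is why the same NetCatPod/DNS/Localhost/HairPin checks are reused unchanged. For comparison, the two start invocations recorded above:

    out/minikube-linux-amd64 start -p bridge-149759 --memory=3072 --cni=bridge --driver=kvm2
    out/minikube-linux-amd64 start -p kubenet-149759 --memory=3072 --network-plugin=kubenet --driver=kvm2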

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-149759 "pgrep -a kubelet"
I1227 09:20:01.064775    9461 config.go:182] Loaded profile config "enable-default-cni-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7455x" [2c5c351c-c5ed-4630-8983-ea9d9ed1d2ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-7455x" [2c5c351c-c5ed-4630-8983-ea9d9ed1d2ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00603091s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-79kfl" [2e60c99c-2755-4876-9453-093cbfcdcd90] Running
E1227 09:20:12.702016    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00596342s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-149759 "pgrep -a kubelet"
I1227 09:20:14.949698    9461 config.go:182] Loaded profile config "flannel-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-sjvn6" [99e192cb-f6cb-4007-9d81-0c54f80e30c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 09:20:17.723608    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-sjvn6" [99e192cb-f6cb-4007-9d81-0c54f80e30c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005744323s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (97.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-215700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-215700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m37.615480008s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (97.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (100.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-673179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-673179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0: (1m40.229786443s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.23s)
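
Note: the no-preload group starts with --preload=false, which skips minikube's preloaded image/binary tarball, so the Kubernetes images are pulled individually; that typically makes a first start slower than a preloaded one. One way to inspect the result afterwards, sketched for the profile above:

    # images below were pulled one by one rather than extracted from a preload tarball
    out/minikube-linux-amd64 ssh -p no-preload-673179 "docker images"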

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-149759 "pgrep -a kubelet"
I1227 09:20:54.148298    9461 config.go:182] Loaded profile config "bridge-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-c2428" [67f3aa06-daf9-429c-840f-cd008039956d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-c2428" [67f3aa06-daf9-429c-840f-cd008039956d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.003353596s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-149759 "pgrep -a kubelet"
I1227 09:21:13.318844    9461 config.go:182] Loaded profile config "kubenet-149759": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-149759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mwnvg" [97365137-62d8-45d5-8047-91388f4aa244] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mwnvg" [97365137-62d8-45d5-8047-91388f4aa244] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.005213179s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-130321 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-130321 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0: (1m24.409401256s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.41s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-149759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-149759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-646181 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-646181 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0: (1m30.025693233s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-215700 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a31cda53-342e-4b74-bfe9-9a9271000e13] Pending
helpers_test.go:353: "busybox" [a31cda53-342e-4b74-bfe9-9a9271000e13] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a31cda53-342e-4b74-bfe9-9a9271000e13] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003848393s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-215700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)
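
Note: each DeployApp step creates the busybox pod from testdata/busybox.yaml, waits for it to run, then execs ulimit -n inside it; the exec both proves kubectl exec works against the new cluster and records the pod's open-file-descriptor soft limit. A by-hand sketch, using kubectl wait in place of the harness's own pod poller:

    kubectl --context old-k8s-version-215700 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-215700 wait --for=condition=Ready pod/busybox --timeout=8m
    # prints the soft limit on open file descriptors inside the container
    kubectl --context old-k8s-version-215700 exec busybox -- /bin/sh -c "ulimit -n"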

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-215700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-215700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044423598s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-215700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-215700 --alsologtostderr -v=3
E1227 09:22:22.153663    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:22.158979    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:22.169308    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:22.189666    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:22.230012    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:22.310386    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:22.470951    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:22.791405    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:23.431956    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:24.712875    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-215700 --alsologtostderr -v=3: (12.67741127s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-673179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d14050de-6e9e-4e52-ae67-5f4e04199664] Pending
helpers_test.go:353: "busybox" [d14050de-6e9e-4e52-ae67-5f4e04199664] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1227 09:22:27.273676    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [d14050de-6e9e-4e52-ae67-5f4e04199664] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00501343s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-673179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-215700 -n old-k8s-version-215700
E1227 09:22:32.394671    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-215700 -n old-k8s-version-215700: exit status 7 (71.303357ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-215700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
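
Note: EnableAddonAfterStop runs right after the profile has been stopped, so the status --format={{.Host}} probe is expected to fail; the harness tolerates the non-zero exit ("may be ok") and then verifies that addons can still be enabled against the stopped profile. A condensed sketch of the same two steps:

    # a non-zero exit is expected here because the profile is stopped
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-215700 || true
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-215700 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4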

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-215700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-215700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (51.192742756s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-215700 -n old-k8s-version-215700
E1227 09:23:23.720353    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-673179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-673179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.119844825s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-673179 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-673179 --alsologtostderr -v=3
E1227 09:22:42.635810    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:42.757454    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:42.763478    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:42.773848    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:42.794178    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:42.834558    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:42.914930    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:43.075413    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:43.395592    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:44.036830    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:45.317510    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:47.878436    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-673179 --alsologtostderr -v=3: (13.985511406s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-130321 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c50290c0-c85a-431b-9008-b1375104ebc8] Pending
helpers_test.go:353: "busybox" [c50290c0-c85a-431b-9008-b1375104ebc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1227 09:22:52.352796    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/skaffold-458569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [c50290c0-c85a-431b-9008-b1375104ebc8] Running
E1227 09:22:52.999002    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005662069s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-130321 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673179 -n no-preload-673179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673179 -n no-preload-673179: exit status 7 (65.75982ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-673179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (44.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-673179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-673179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0: (44.290721265s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673179 -n no-preload-673179
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-130321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-130321 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (14.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-130321 --alsologtostderr -v=3
E1227 09:23:03.116555    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:03.239472    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-130321 --alsologtostderr -v=3: (14.890846316s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-646181 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2e78782e-7877-48df-a0de-78f2c2e73e8f] Pending
helpers_test.go:353: "busybox" [2e78782e-7877-48df-a0de-78f2c2e73e8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2e78782e-7877-48df-a0de-78f2c2e73e8f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.006396544s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-646181 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-130321 -n embed-certs-130321
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-130321 -n embed-certs-130321: exit status 7 (73.384354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-130321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-130321 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-130321 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0: (52.221896303s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-130321 -n embed-certs-130321
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-646181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-646181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.141932491s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-646181 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (14.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-646181 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-646181 --alsologtostderr -v=3: (14.603417718s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nhm94" [d70b1ae3-7369-47b0-9ec2-fbab1eca17b6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1227 09:23:26.771845    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:26.777184    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:26.787674    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:26.808039    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:26.848411    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:26.928825    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:27.089297    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:27.410188    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:28.050722    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:29.331402    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nhm94" [d70b1ae3-7369-47b0-9ec2-fbab1eca17b6] Running
E1227 09:23:31.891855    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005411654s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cbvtx" [1a4a87c1-09ea-4b0b-80ff-e3a8eba92713] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cbvtx" [1a4a87c1-09ea-4b0b-80ff-e3a8eba92713] Running
E1227 09:23:37.012606    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004349787s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nhm94" [d70b1ae3-7369-47b0-9ec2-fbab1eca17b6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004981415s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-215700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181: exit status 7 (63.168372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-646181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-646181 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-646181 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0: (46.514163017s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cbvtx" [1a4a87c1-09ea-4b0b-80ff-e3a8eba92713] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004730994s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-673179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-215700 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-215700 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-215700 -n old-k8s-version-215700
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-215700 -n old-k8s-version-215700: exit status 2 (281.783598ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-215700 -n old-k8s-version-215700
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-215700 -n old-k8s-version-215700: exit status 2 (268.256128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-215700 --alsologtostderr -v=1
E1227 09:23:44.077724    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-215700 -n old-k8s-version-215700
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-215700 -n old-k8s-version-215700
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-673179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-673179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673179 -n no-preload-673179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673179 -n no-preload-673179: exit status 2 (250.008271ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-673179 -n no-preload-673179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-673179 -n no-preload-673179: exit status 2 (265.934623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-673179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673179 -n no-preload-673179
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-673179 -n no-preload-673179
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (62.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-062669 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0
E1227 09:23:47.252846    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-062669 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0: (1m2.632605523s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.63s)

                                                
                                    
TestPreload/PreloadSrc/gcs (3.32s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-928350 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2 
E1227 09:23:53.201332    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:53.206616    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:53.216964    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:53.237262    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:53.277618    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:53.357947    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:53.518941    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:53.839423    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-928350 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2 : (3.19552422s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-928350" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-928350
--- PASS: TestPreload/PreloadSrc/gcs (3.32s)

                                                
                                    
TestPreload/PreloadSrc/github (4.21s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-401332 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2 
E1227 09:23:54.480567    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:55.760963    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:23:58.321323    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-401332 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2 : (4.075209314s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-401332" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-401332
--- PASS: TestPreload/PreloadSrc/github (4.21s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (1.06s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-176141 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2 
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-176141" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-176141
--- PASS: TestPreload/PreloadSrc/gcs-cached (1.06s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)

                                                
                                    
TestISOImage/VersionJSON (0.2s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   kicbase_version: v0.0.48-1766570851-22316
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 156790275311057371301be2e28ce4b8d3758574
iso_test.go:118:   iso_version: v1.37.0-1766719468-22158
--- PASS: TestISOImage/VersionJSON (0.20s)
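For reference, the four fields printed by iso_test.go above imply that the ISO's /version.json has roughly the following shape. This is a sketch reconstructed from the parsed values in this log; any additional fields or different key ordering in the actual file are not visible here:

	{
		"iso_version": "v1.37.0-1766719468-22158",
		"kicbase_version": "v0.0.48-1766570851-22316",
		"minikube_version": "v1.37.0",
		"commit": "156790275311057371301be2e28ce4b8d3758574"
	}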

                                                
                                    
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-230404 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
E1227 09:24:03.442163    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:04.680556    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fm8d9" [f4c0c60e-b4ce-4c7d-8058-11b71f737d6e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1227 09:24:07.733115    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/calico-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fm8d9" [f4c0c60e-b4ce-4c7d-8058-11b71f737d6e] Running
E1227 09:24:13.682372    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.637831    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.643137    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.653412    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.673733    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.714085    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.794445    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.954922    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:18.275530    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:18.916643    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004975807s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fm8d9" [f4c0c60e-b4ce-4c7d-8058-11b71f737d6e] Running
E1227 09:24:20.197407    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:22.757999    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007622371s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-130321 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-130321 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-130321 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-130321 -n embed-certs-130321
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-130321 -n embed-certs-130321: exit status 2 (289.828519ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-130321 -n embed-certs-130321
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-130321 -n embed-certs-130321: exit status 2 (280.372716ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-130321 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-130321 -n embed-certs-130321
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-130321 -n embed-certs-130321
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4rfh2" [8ca65d52-5459-4bbb-919a-2a609faec686] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005278156s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4rfh2" [8ca65d52-5459-4bbb-919a-2a609faec686] Running
E1227 09:24:34.162744    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004218954s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-646181 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-646181 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-646181 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181: exit status 2 (242.88345ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181: exit status 2 (240.318793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-646181 --alsologtostderr -v=1
E1227 09:24:38.119911    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-646181 -n default-k8s-diff-port-646181
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-062669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1227 09:24:50.037473    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/gvisor-889109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (14.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-062669 --alsologtostderr -v=3
E1227 09:24:55.750969    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:58.600860    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/false-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.351397    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.356717    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.367067    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.387435    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.428075    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.508425    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.668872    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:01.989554    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:02.630204    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:03.910388    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-062669 --alsologtostderr -v=3: (14.310143498s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (14.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-062669 -n newest-cni-062669
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-062669 -n newest-cni-062669: exit status 7 (62.131936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-062669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (29.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-062669 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0
E1227 09:25:05.998680    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/auto-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:06.471587    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:08.733724    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:08.739124    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:08.749452    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:08.769852    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:08.810216    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:08.890633    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:09.051176    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:09.371815    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:10.013021    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:11.293569    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:11.592414    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:12.701447    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/addons-598566/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:13.854424    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:15.122985    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/custom-flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:18.974966    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:21.832592    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/enable-default-cni-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:26.601168    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/kindnet-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:29.215611    9461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-5516/.minikube/profiles/flannel-149759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-062669 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0: (29.074878054s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-062669 -n newest-cni-062669
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-062669 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
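The image audit above can be re-run by hand with the same command; this is a minimal sketch (piping through jq for readability is an assumption and not part of the test):

    # list the images present in the profile as JSON; the test then reports
    # any non-minikube image it finds, such as gcr.io/k8s-minikube/gvisor-addon:2 above
    out/minikube-linux-amd64 -p newest-cni-062669 image list --format=json | jq .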

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-062669 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-062669 -n newest-cni-062669
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-062669 -n newest-cni-062669: exit status 2 (235.525803ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-062669 -n newest-cni-062669
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-062669 -n newest-cni-062669: exit status 2 (237.645035ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-062669 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-062669 -n newest-cni-062669
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-062669 -n newest-cni-062669
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.68s)
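For reference, the pause/unpause cycle exercised by this test can be repeated by hand against the same profile. This is a minimal sketch built only from the commands logged above (profile newest-cni-062669 is from this run); as noted in the log, a non-zero exit from status while the node is paused is expected:

    # pause the control plane, check the reported state, then resume it
    out/minikube-linux-amd64 pause -p newest-cni-062669 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-062669 -n newest-cni-062669   # prints "Paused", exits non-zero
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-062669 -n newest-cni-062669     # prints "Stopped", exits non-zero
    out/minikube-linux-amd64 unpause -p newest-cni-062669 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-062669 -n newest-cni-062669   # exits zero once the API server is back up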

                                                
                                    

Test skip (34/370)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.35.0/cached-images 0
15 TestDownloadOnly/v1.35.0/binaries 0
16 TestDownloadOnly/v1.35.0/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
187 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
215 TestKicCustomNetwork 0
216 TestKicExistingNetwork 0
217 TestKicCustomSubnet 0
218 TestKicStaticIP 0
250 TestChangeNoneUser 0
253 TestScheduledStopWindows 0
257 TestInsufficientStorage 0
261 TestMissingContainerUpgrade 0
273 TestNetworkPlugins/group/cilium 3.87
291 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.87s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-149759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-149759" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-149759

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-149759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149759"

                                                
                                                
----------------------- debugLogs end: cilium-149759 [took: 3.698291459s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-149759" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-149759
--- SKIP: TestNetworkPlugins/group/cilium (3.87s)
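If a stale profile like this one is ever left behind, the cleanup shown at the end of the block can be done manually; a minimal sketch using only the commands the log itself suggests:

    # list known profiles, then delete the leftover one created for the skipped test
    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 delete -p cilium-149759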

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-309458" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-309458
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    