Test Report: KVM_Linux 22140

                    
eeabf3e2417ce4ab4b3a542afe843529230a6fb1:2025-12-17:42814

Failed tests (10/447)

TestNetworkPlugins/group/kubenet/Start (52.99s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E1217 01:37:19.415797  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubenet-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : exit status 80 (52.953867505s)

-- stdout --
	* [kubenet-739084] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubenet-739084" primary control-plane node in "kubenet-739084" cluster
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

-- /stdout --
** stderr ** 
	I1217 01:37:14.359453  420794 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:37:14.359723  420794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:37:14.359734  420794 out.go:374] Setting ErrFile to fd 2...
	I1217 01:37:14.359739  420794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:37:14.359985  420794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 01:37:14.360503  420794 out.go:368] Setting JSON to false
	I1217 01:37:14.361541  420794 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8373,"bootTime":1765927061,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 01:37:14.361648  420794 start.go:143] virtualization: kvm guest
	I1217 01:37:14.366039  420794 out.go:179] * [kubenet-739084] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 01:37:14.367482  420794 out.go:179]   - MINIKUBE_LOCATION=22140
	I1217 01:37:14.367476  420794 notify.go:221] Checking for updates...
	I1217 01:37:14.369503  420794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:37:14.370556  420794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 01:37:14.371667  420794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 01:37:14.372741  420794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 01:37:14.373945  420794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:37:14.375413  420794 config.go:182] Loaded profile config "bridge-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:37:14.375517  420794 config.go:182] Loaded profile config "enable-default-cni-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:37:14.375598  420794 config.go:182] Loaded profile config "flannel-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:37:14.375665  420794 config.go:182] Loaded profile config "guest-625557": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1217 01:37:14.375823  420794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:37:14.412288  420794 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 01:37:14.413347  420794 start.go:309] selected driver: kvm2
	I1217 01:37:14.413370  420794 start.go:927] validating driver "kvm2" against <nil>
	I1217 01:37:14.413384  420794 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:37:14.414460  420794 start_flags.go:331] no existing cluster config was found, will generate one from the flags 
	I1217 01:37:14.414799  420794 start_flags.go:1016] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:37:14.414841  420794 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1217 01:37:14.414927  420794 start.go:353] cluster config:
	{Name:kubenet-739084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-739084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 01:37:14.415092  420794 iso.go:125] acquiring lock: {Name:mk68dcf288160193f263ebe6317eb4b124893df0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:37:14.417099  420794 out.go:179] * Starting "kubenet-739084" primary control-plane node in "kubenet-739084" cluster
	I1217 01:37:14.418097  420794 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1217 01:37:14.418143  420794 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1217 01:37:14.418155  420794 cache.go:65] Caching tarball of preloaded images
	I1217 01:37:14.418276  420794 preload.go:238] Found /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:37:14.418294  420794 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1217 01:37:14.418408  420794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/config.json ...
	I1217 01:37:14.418428  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/config.json: {Name:mk35ba1da3544d79715dd880f129c6a07d5d567c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:14.418570  420794 start.go:360] acquireMachinesLock for kubenet-739084: {Name:mk3661de436507868e9140a67d3465855d5816bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 01:37:15.415512  420794 start.go:364] duration metric: took 996.898624ms to acquireMachinesLock for "kubenet-739084"
	I1217 01:37:15.415586  420794 start.go:93] Provisioning new machine with config: &{Name:kubenet-739084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-739084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:37:15.415724  420794 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 01:37:15.417452  420794 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 01:37:15.417706  420794 start.go:159] libmachine.API.Create for "kubenet-739084" (driver="kvm2")
	I1217 01:37:15.417753  420794 client.go:173] LocalClient.Create starting
	I1217 01:37:15.417844  420794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca.pem
	I1217 01:37:15.417899  420794 main.go:143] libmachine: Decoding PEM data...
	I1217 01:37:15.417952  420794 main.go:143] libmachine: Parsing certificate...
	I1217 01:37:15.418057  420794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22140-379084/.minikube/certs/cert.pem
	I1217 01:37:15.418100  420794 main.go:143] libmachine: Decoding PEM data...
	I1217 01:37:15.418117  420794 main.go:143] libmachine: Parsing certificate...
	I1217 01:37:15.418598  420794 main.go:143] libmachine: creating domain...
	I1217 01:37:15.418617  420794 main.go:143] libmachine: creating network...
	I1217 01:37:15.420245  420794 main.go:143] libmachine: found existing default network
	I1217 01:37:15.420511  420794 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 01:37:15.421740  420794 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:b2:71} reservation:<nil>}
	I1217 01:37:15.422945  420794 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:19:cf:48} reservation:<nil>}
	I1217 01:37:15.423612  420794 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ed:58:75} reservation:<nil>}
	I1217 01:37:15.424400  420794 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:54:14:c6} reservation:<nil>}
	I1217 01:37:15.425636  420794 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c31fb0}
	I1217 01:37:15.425739  420794 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kubenet-739084</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 01:37:15.432485  420794 main.go:143] libmachine: creating private network mk-kubenet-739084 192.168.83.0/24...
	I1217 01:37:15.523367  420794 main.go:143] libmachine: private network mk-kubenet-739084 192.168.83.0/24 created
	I1217 01:37:15.523989  420794 main.go:143] libmachine: <network>
	  <name>mk-kubenet-739084</name>
	  <uuid>d5f7491a-cfbe-441c-81d3-5170701572e5</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:b9:65:0f'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 01:37:15.524042  420794 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084 ...
	I1217 01:37:15.524085  420794 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22140-379084/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 01:37:15.524106  420794 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 01:37:15.524207  420794 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22140-379084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22140-379084/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 01:37:15.823686  420794 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/id_rsa...
	I1217 01:37:15.923620  420794 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/kubenet-739084.rawdisk...
	I1217 01:37:15.923682  420794 main.go:143] libmachine: Writing magic tar header
	I1217 01:37:15.923716  420794 main.go:143] libmachine: Writing SSH key tar header
	I1217 01:37:15.923883  420794 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084 ...
	I1217 01:37:15.923998  420794 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084
	I1217 01:37:15.924038  420794 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084 (perms=drwx------)
	I1217 01:37:15.924057  420794 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22140-379084/.minikube/machines
	I1217 01:37:15.924070  420794 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22140-379084/.minikube/machines (perms=drwxr-xr-x)
	I1217 01:37:15.924081  420794 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 01:37:15.924089  420794 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22140-379084/.minikube (perms=drwxr-xr-x)
	I1217 01:37:15.924098  420794 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22140-379084
	I1217 01:37:15.924105  420794 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22140-379084 (perms=drwxrwxr-x)
	I1217 01:37:15.924118  420794 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 01:37:15.924128  420794 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 01:37:15.924140  420794 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 01:37:15.924147  420794 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 01:37:15.924157  420794 main.go:143] libmachine: checking permissions on dir: /home
	I1217 01:37:15.924167  420794 main.go:143] libmachine: skipping /home - not owner
	I1217 01:37:15.924171  420794 main.go:143] libmachine: defining domain...
	I1217 01:37:15.925793  420794 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kubenet-739084</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/kubenet-739084.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kubenet-739084'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 01:37:15.932811  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:2d:09:f3 in network default
	I1217 01:37:15.933770  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:15.933798  420794 main.go:143] libmachine: starting domain...
	I1217 01:37:15.933817  420794 main.go:143] libmachine: ensuring networks are active...
	I1217 01:37:15.934948  420794 main.go:143] libmachine: Ensuring network default is active
	I1217 01:37:15.935528  420794 main.go:143] libmachine: Ensuring network mk-kubenet-739084 is active
	I1217 01:37:15.936309  420794 main.go:143] libmachine: getting domain XML...
	I1217 01:37:15.937740  420794 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kubenet-739084</name>
	  <uuid>0a70291e-7c8d-441a-8e17-f75f01a42724</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/kubenet-739084.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:9b:79:65'/>
	      <source network='mk-kubenet-739084'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:2d:09:f3'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 01:37:17.444692  420794 main.go:143] libmachine: waiting for domain to start...
	I1217 01:37:17.446366  420794 main.go:143] libmachine: domain is now running
	I1217 01:37:17.446383  420794 main.go:143] libmachine: waiting for IP...
	I1217 01:37:17.447236  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:17.447989  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:17.448007  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:17.448477  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:17.448530  420794 retry.go:31] will retry after 299.838521ms: waiting for domain to come up
	I1217 01:37:17.750428  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:17.751477  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:17.751502  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:17.752085  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:17.752134  420794 retry.go:31] will retry after 380.277187ms: waiting for domain to come up
	I1217 01:37:18.133862  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:18.134710  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:18.134746  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:18.135203  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:18.135250  420794 retry.go:31] will retry after 386.801502ms: waiting for domain to come up
	I1217 01:37:18.524053  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:18.524983  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:18.525009  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:18.525546  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:18.525594  420794 retry.go:31] will retry after 448.33192ms: waiting for domain to come up
	I1217 01:37:18.975478  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:18.976389  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:18.976414  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:18.976956  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:18.977012  420794 retry.go:31] will retry after 505.969311ms: waiting for domain to come up
	I1217 01:37:19.485397  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:19.486380  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:19.486406  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:19.486992  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:19.487045  420794 retry.go:31] will retry after 599.54321ms: waiting for domain to come up
	I1217 01:37:20.088167  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:20.088923  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:20.088946  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:20.089457  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:20.089505  420794 retry.go:31] will retry after 1.181645865s: waiting for domain to come up
	I1217 01:37:21.272952  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:21.273986  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:21.274010  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:21.274563  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:21.274616  420794 retry.go:31] will retry after 1.39417867s: waiting for domain to come up
	I1217 01:37:22.671388  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:22.672172  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:22.672192  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:22.672681  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:22.672723  420794 retry.go:31] will retry after 1.418833384s: waiting for domain to come up
	I1217 01:37:24.093298  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:24.094268  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:24.094291  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:24.094706  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:24.094749  420794 retry.go:31] will retry after 1.9811214s: waiting for domain to come up
	I1217 01:37:26.077288  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:26.078004  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:26.078024  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:26.078445  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:26.078497  420794 retry.go:31] will retry after 1.847842229s: waiting for domain to come up
	I1217 01:37:27.928127  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:27.928948  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:27.928979  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:27.929465  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:27.929511  420794 retry.go:31] will retry after 2.913906884s: waiting for domain to come up
	I1217 01:37:30.845254  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:30.846123  420794 main.go:143] libmachine: no network interface addresses found for domain kubenet-739084 (source=lease)
	I1217 01:37:30.846147  420794 main.go:143] libmachine: trying to list again with source=arp
	I1217 01:37:30.846606  420794 main.go:143] libmachine: unable to find current IP address of domain kubenet-739084 in network mk-kubenet-739084 (interfaces detected: [])
	I1217 01:37:30.846651  420794 retry.go:31] will retry after 4.291898165s: waiting for domain to come up
	I1217 01:37:35.140515  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.141424  420794 main.go:143] libmachine: domain kubenet-739084 has current primary IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.141444  420794 main.go:143] libmachine: found domain IP: 192.168.83.31
	I1217 01:37:35.141452  420794 main.go:143] libmachine: reserving static IP address...
	I1217 01:37:35.141868  420794 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kubenet-739084", mac: "52:54:00:9b:79:65", ip: "192.168.83.31"} in network mk-kubenet-739084
	I1217 01:37:35.359978  420794 main.go:143] libmachine: reserved static IP address 192.168.83.31 for domain kubenet-739084
	I1217 01:37:35.360006  420794 main.go:143] libmachine: waiting for SSH...
	I1217 01:37:35.360021  420794 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 01:37:35.363668  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.364179  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:35.364211  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.364429  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:35.364749  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:35.364766  420794 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 01:37:35.476475  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:37:35.477019  420794 main.go:143] libmachine: domain creation complete
	I1217 01:37:35.478922  420794 machine.go:94] provisionDockerMachine start ...
	I1217 01:37:35.481513  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.481971  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:35.481999  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.482190  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:35.482427  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:35.482440  420794 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:37:35.585384  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 01:37:35.585413  420794 buildroot.go:166] provisioning hostname "kubenet-739084"
	I1217 01:37:35.588746  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.589233  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:35.589277  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.589525  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:35.589790  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:35.589812  420794 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-739084 && echo "kubenet-739084" | sudo tee /etc/hostname
	I1217 01:37:35.716866  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-739084
	
	I1217 01:37:35.720575  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.721177  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:35.721214  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.721497  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:35.721831  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:35.721860  420794 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-739084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-739084/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-739084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:37:35.835412  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:37:35.835448  420794 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22140-379084/.minikube CaCertPath:/home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22140-379084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22140-379084/.minikube}
	I1217 01:37:35.835482  420794 buildroot.go:174] setting up certificates
	I1217 01:37:35.835496  420794 provision.go:84] configureAuth start
	I1217 01:37:35.839024  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.839614  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:35.839657  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.843296  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.843802  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:35.843834  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.844080  420794 provision.go:143] copyHostCerts
	I1217 01:37:35.844152  420794 exec_runner.go:144] found /home/jenkins/minikube-integration/22140-379084/.minikube/ca.pem, removing ...
	I1217 01:37:35.844195  420794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22140-379084/.minikube/ca.pem
	I1217 01:37:35.844280  420794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22140-379084/.minikube/ca.pem (1082 bytes)
	I1217 01:37:35.844427  420794 exec_runner.go:144] found /home/jenkins/minikube-integration/22140-379084/.minikube/cert.pem, removing ...
	I1217 01:37:35.844444  420794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22140-379084/.minikube/cert.pem
	I1217 01:37:35.844493  420794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22140-379084/.minikube/cert.pem (1123 bytes)
	I1217 01:37:35.844580  420794 exec_runner.go:144] found /home/jenkins/minikube-integration/22140-379084/.minikube/key.pem, removing ...
	I1217 01:37:35.844592  420794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22140-379084/.minikube/key.pem
	I1217 01:37:35.844630  420794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22140-379084/.minikube/key.pem (1675 bytes)
	I1217 01:37:35.844697  420794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22140-379084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca-key.pem org=jenkins.kubenet-739084 san=[127.0.0.1 192.168.83.31 kubenet-739084 localhost minikube]
	I1217 01:37:35.938995  420794 provision.go:177] copyRemoteCerts
	I1217 01:37:35.939059  420794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:37:35.942201  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.942616  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:35.942652  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:35.942861  420794 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/id_rsa Username:docker}
	I1217 01:37:36.030280  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:37:36.061289  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 01:37:36.090690  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:37:36.126677  420794 provision.go:87] duration metric: took 291.147648ms to configureAuth
	I1217 01:37:36.126735  420794 buildroot.go:189] setting minikube options for container-runtime
	I1217 01:37:36.126949  420794 config.go:182] Loaded profile config "kubenet-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:37:36.130510  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:36.130976  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:36.131009  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:36.131227  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:36.131470  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:36.131484  420794 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:37:36.231897  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1217 01:37:36.231941  420794 buildroot.go:70] root file system type: tmpfs
	I1217 01:37:36.232074  420794 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:37:36.235549  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:36.235976  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:36.236006  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:36.236274  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:36.236573  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:36.236650  420794 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:37:36.359901  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 01:37:36.363884  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:36.364433  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:36.364474  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:36.364724  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:36.365050  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:36.365079  420794 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:37:37.369597  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1217 01:37:37.369635  420794 machine.go:97] duration metric: took 1.890689976s to provisionDockerMachine
	I1217 01:37:37.369652  420794 client.go:176] duration metric: took 21.951886764s to LocalClient.Create
	I1217 01:37:37.369700  420794 start.go:167] duration metric: took 21.951994124s to libmachine.API.Create "kubenet-739084"
	I1217 01:37:37.369716  420794 start.go:293] postStartSetup for "kubenet-739084" (driver="kvm2")
	I1217 01:37:37.369729  420794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:37:37.369817  420794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:37:37.373347  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.373963  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:37.373991  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.374165  420794 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/id_rsa Username:docker}
	I1217 01:37:37.460641  420794 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:37:37.466458  420794 info.go:137] Remote host: Buildroot 2025.02
	I1217 01:37:37.466506  420794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22140-379084/.minikube/addons for local assets ...
	I1217 01:37:37.466600  420794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22140-379084/.minikube/files for local assets ...
	I1217 01:37:37.466703  420794 filesync.go:149] local asset: /home/jenkins/minikube-integration/22140-379084/.minikube/files/etc/ssl/certs/3830082.pem -> 3830082.pem in /etc/ssl/certs
	I1217 01:37:37.466866  420794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:37:37.479074  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/files/etc/ssl/certs/3830082.pem --> /etc/ssl/certs/3830082.pem (1708 bytes)
	I1217 01:37:37.511689  420794 start.go:296] duration metric: took 141.95541ms for postStartSetup
	I1217 01:37:37.515688  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.516221  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:37.516252  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.516479  420794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/config.json ...
	I1217 01:37:37.516652  420794 start.go:128] duration metric: took 22.100912939s to createHost
	I1217 01:37:37.519149  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.519686  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:37.519727  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.519961  420794 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:37.520245  420794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I1217 01:37:37.520267  420794 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 01:37:37.623128  420794 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765935457.590036974
	
	I1217 01:37:37.623160  420794 fix.go:216] guest clock: 1765935457.590036974
	I1217 01:37:37.623168  420794 fix.go:229] Guest: 2025-12-17 01:37:37.590036974 +0000 UTC Remote: 2025-12-17 01:37:37.516664099 +0000 UTC m=+23.208432155 (delta=73.372875ms)
	I1217 01:37:37.623187  420794 fix.go:200] guest clock delta is within tolerance: 73.372875ms
	I1217 01:37:37.623193  420794 start.go:83] releasing machines lock for "kubenet-739084", held for 22.207647284s
	I1217 01:37:37.626158  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.626679  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:37.626704  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.627330  420794 ssh_runner.go:195] Run: cat /version.json
	I1217 01:37:37.627419  420794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:37:37.630872  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.631026  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.631367  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:37.631420  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:37.631457  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.631490  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:37.631647  420794 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/id_rsa Username:docker}
	I1217 01:37:37.631819  420794 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/id_rsa Username:docker}
	I1217 01:37:37.731657  420794 ssh_runner.go:195] Run: systemctl --version
	I1217 01:37:37.737887  420794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:37:37.744827  420794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:37:37.744889  420794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:37:37.765467  420794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 01:37:37.765531  420794 start.go:496] detecting cgroup driver to use...
	I1217 01:37:37.765664  420794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:37:37.787174  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:37:37.799263  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:37:37.811773  420794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:37:37.811854  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:37:37.827106  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:37:37.842166  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:37:37.862711  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:37:37.880572  420794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:37:37.900247  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:37:37.922441  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:37:37.943450  420794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:37:37.962458  420794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:37:37.977848  420794 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 01:37:37.977938  420794 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 01:37:37.998373  420794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:37:38.015139  420794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:37:38.233568  420794 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 01:37:38.296086  420794 start.go:496] detecting cgroup driver to use...
	I1217 01:37:38.296194  420794 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:37:38.321136  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:37:38.344765  420794 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:37:38.378574  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:37:38.398775  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:37:38.415963  420794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:37:38.453251  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:37:38.472632  420794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:37:38.499980  420794 ssh_runner.go:195] Run: which cri-dockerd
	I1217 01:37:38.504759  420794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:37:38.517437  420794 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1217 01:37:38.538103  420794 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:37:38.720459  420794 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:37:38.900705  420794 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:37:38.900827  420794 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 01:37:38.927414  420794 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:37:38.947887  420794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:37:39.105164  420794 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:37:39.640144  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:37:39.662643  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:37:39.683567  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:37:39.703311  420794 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:37:39.904561  420794 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:37:40.060176  420794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:37:40.223699  420794 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:37:40.261589  420794 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:37:40.278213  420794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:37:40.442073  420794 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:37:40.549855  420794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:37:40.570600  420794 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:37:40.570684  420794 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:37:40.578062  420794 start.go:564] Will wait 60s for crictl version
	I1217 01:37:40.578130  420794 ssh_runner.go:195] Run: which crictl
	I1217 01:37:40.582430  420794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 01:37:40.624330  420794 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1217 01:37:40.624423  420794 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:37:40.655708  420794 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:37:40.690370  420794 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 28.5.2 ...
	I1217 01:37:40.693603  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:40.694185  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:37:40.694225  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:37:40.694543  420794 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1217 01:37:40.699203  420794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:37:40.715248  420794 kubeadm.go:884] updating cluster {Name:kubenet-739084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-739084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1217 01:37:40.715373  420794 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1217 01:37:40.715444  420794 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:37:40.732012  420794 docker.go:691] Got preloaded images: 
	I1217 01:37:40.732038  420794 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.2 wasn't preloaded
	I1217 01:37:40.732082  420794 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1217 01:37:40.746271  420794 ssh_runner.go:195] Run: which lz4
	I1217 01:37:40.751164  420794 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 01:37:40.756330  420794 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 01:37:40.756361  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284296555 bytes)
	I1217 01:37:41.887995  420794 docker.go:655] duration metric: took 1.136856041s to copy over tarball
	I1217 01:37:41.888096  420794 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 01:37:43.457133  420794 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.568996546s)
	I1217 01:37:43.457179  420794 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 01:37:43.511137  420794 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1217 01:37:43.529537  420794 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1217 01:37:43.552586  420794 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:37:43.574986  420794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:37:43.805368  420794 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:37:45.803435  420794 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.998021207s)
	I1217 01:37:45.803553  420794 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:37:45.832825  420794 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:37:45.832860  420794 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:37:45.832875  420794 kubeadm.go:935] updating node { 192.168.83.31 8443 v1.34.2 docker true true} ...
	I1217 01:37:45.833060  420794 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-739084 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.31 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-739084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:37:45.833146  420794 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:37:45.900957  420794 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1217 01:37:45.900997  420794 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:37:45.901029  420794 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.31 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-739084 NodeName:kubenet-739084 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:37:45.901220  420794 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-739084"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.31"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.31"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:37:45.901312  420794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:37:45.917026  420794 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:37:45.917106  420794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:37:45.935862  420794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
	I1217 01:37:45.963848  420794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:37:45.992467  420794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 01:37:46.020856  420794 ssh_runner.go:195] Run: grep 192.168.83.31	control-plane.minikube.internal$ /etc/hosts
	I1217 01:37:46.025981  420794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:37:46.042027  420794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:37:46.236638  420794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:37:46.289772  420794 certs.go:69] Setting up /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084 for IP: 192.168.83.31
	I1217 01:37:46.289803  420794 certs.go:195] generating shared ca certs ...
	I1217 01:37:46.289832  420794 certs.go:227] acquiring lock for ca certs: {Name:mk3d9ab29bd1af55d3d6c3415e8609e0085bd1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:46.290078  420794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22140-379084/.minikube/ca.key
	I1217 01:37:46.290182  420794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22140-379084/.minikube/proxy-client-ca.key
	I1217 01:37:46.290202  420794 certs.go:257] generating profile certs ...
	I1217 01:37:46.290283  420794 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/client.key
	I1217 01:37:46.290299  420794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/client.crt with IP's: []
	I1217 01:37:46.377557  420794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/client.crt ...
	I1217 01:37:46.377602  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/client.crt: {Name:mkbff0d360903ba825f75d05e363d36e5d78e5b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:46.377888  420794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/client.key ...
	I1217 01:37:46.377936  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/client.key: {Name:mk6cff7955a553f1a3c7ec6a2819eb43f9ae4618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:46.378825  420794 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.key.9e482c59
	I1217 01:37:46.378857  420794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.crt.9e482c59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.31]
	I1217 01:37:46.443879  420794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.crt.9e482c59 ...
	I1217 01:37:46.443932  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.crt.9e482c59: {Name:mk68bca42bf3a7175e04d2f6ae4079ff369e410a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:46.444252  420794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.key.9e482c59 ...
	I1217 01:37:46.444282  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.key.9e482c59: {Name:mkc3809d70f537d8019f1658aa17cb2f29e758bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:46.444536  420794 certs.go:382] copying /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.crt.9e482c59 -> /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.crt
	I1217 01:37:46.444650  420794 certs.go:386] copying /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.key.9e482c59 -> /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.key
	I1217 01:37:46.444739  420794 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.key
	I1217 01:37:46.444766  420794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.crt with IP's: []
	I1217 01:37:46.461945  420794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.crt ...
	I1217 01:37:46.461970  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.crt: {Name:mk90d281928ba90a36c7f5241c678b49aac68e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:46.462150  420794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.key ...
	I1217 01:37:46.462166  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.key: {Name:mk0e0c76dc932df09cee84fc72f3923dd6d4d9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:37:46.462420  420794 certs.go:484] found cert: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/383008.pem (1338 bytes)
	W1217 01:37:46.462462  420794 certs.go:480] ignoring /home/jenkins/minikube-integration/22140-379084/.minikube/certs/383008_empty.pem, impossibly tiny 0 bytes
	I1217 01:37:46.462536  420794 certs.go:484] found cert: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:37:46.462578  420794 certs.go:484] found cert: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:37:46.462615  420794 certs.go:484] found cert: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:37:46.462649  420794 certs.go:484] found cert: /home/jenkins/minikube-integration/22140-379084/.minikube/certs/key.pem (1675 bytes)
	I1217 01:37:46.462713  420794 certs.go:484] found cert: /home/jenkins/minikube-integration/22140-379084/.minikube/files/etc/ssl/certs/3830082.pem (1708 bytes)
	I1217 01:37:46.463545  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:37:46.507304  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:37:46.545719  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:37:46.591844  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:37:46.631385  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 01:37:46.670632  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1217 01:37:46.706780  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:37:46.744199  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kubenet-739084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:37:46.786405  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/certs/383008.pem --> /usr/share/ca-certificates/383008.pem (1338 bytes)
	I1217 01:37:46.828199  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/files/etc/ssl/certs/3830082.pem --> /usr/share/ca-certificates/3830082.pem (1708 bytes)
	I1217 01:37:46.868466  420794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22140-379084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:37:46.905737  420794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:37:46.930937  420794 ssh_runner.go:195] Run: openssl version
	I1217 01:37:46.939078  420794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:37:46.953417  420794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:37:46.972587  420794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:37:46.979173  420794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:38 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:37:46.979254  420794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:37:46.986978  420794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:37:47.001880  420794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:37:47.020881  420794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/383008.pem
	I1217 01:37:47.042030  420794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/383008.pem /etc/ssl/certs/383008.pem
	I1217 01:37:47.059953  420794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383008.pem
	I1217 01:37:47.065848  420794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:49 /usr/share/ca-certificates/383008.pem
	I1217 01:37:47.065933  420794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383008.pem
	I1217 01:37:47.074094  420794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:37:47.087756  420794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/383008.pem /etc/ssl/certs/51391683.0
	I1217 01:37:47.103097  420794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3830082.pem
	I1217 01:37:47.115995  420794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3830082.pem /etc/ssl/certs/3830082.pem
	I1217 01:37:47.129044  420794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3830082.pem
	I1217 01:37:47.134778  420794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:49 /usr/share/ca-certificates/3830082.pem
	I1217 01:37:47.134846  420794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3830082.pem
	I1217 01:37:47.142616  420794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:37:47.156500  420794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3830082.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:37:47.170764  420794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:37:47.176048  420794 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:37:47.176132  420794 kubeadm.go:401] StartCluster: {Name:kubenet-739084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-739084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 01:37:47.176274  420794 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:37:47.196529  420794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:37:47.210497  420794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:37:47.222428  420794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:37:47.235369  420794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:37:47.235393  420794 kubeadm.go:158] found existing configuration files:
	
	I1217 01:37:47.235438  420794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:37:47.250390  420794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:37:47.250449  420794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:37:47.264954  420794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:37:47.278980  420794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:37:47.279049  420794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:37:47.292879  420794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:37:47.304389  420794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:37:47.304453  420794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:37:47.317444  420794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:37:47.329595  420794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:37:47.329658  420794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:37:47.340739  420794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 01:37:47.396727  420794 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 01:37:47.396805  420794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:37:47.513492  420794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:37:47.513687  420794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:37:47.513837  420794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:37:47.535122  420794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:37:47.537093  420794 out.go:252]   - Generating certificates and keys ...
	I1217 01:37:47.537208  420794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:37:47.537348  420794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:37:47.594451  420794 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:37:48.034348  420794 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:37:48.255818  420794 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:37:48.708732  420794 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:37:48.757693  420794 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:37:48.757937  420794 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-739084 localhost] and IPs [192.168.83.31 127.0.0.1 ::1]
	I1217 01:37:48.862970  420794 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:37:48.863116  420794 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-739084 localhost] and IPs [192.168.83.31 127.0.0.1 ::1]
	I1217 01:37:49.052469  420794 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:37:49.895198  420794 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:37:50.136578  420794 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:37:50.136689  420794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:37:50.261114  420794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:37:50.591632  420794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:37:50.892044  420794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:37:51.152595  420794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:37:51.227818  420794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:37:51.228487  420794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:37:51.231836  420794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:37:51.233046  420794 out.go:252]   - Booting up control plane ...
	I1217 01:37:51.233167  420794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:37:51.233355  420794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:37:51.234513  420794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:37:51.257916  420794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:37:51.258074  420794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:37:51.267304  420794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:37:51.267798  420794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:37:51.267871  420794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:37:51.458080  420794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:37:51.458249  420794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:37:52.458687  420794 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001320687s
	I1217 01:37:52.461707  420794 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 01:37:52.461838  420794 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.83.31:8443/livez
	I1217 01:37:52.461992  420794 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 01:37:52.462131  420794 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 01:37:55.856968  420794 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.396206102s
	I1217 01:37:57.264616  420794 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.804316329s
	I1217 01:37:58.962619  420794 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502808708s
	I1217 01:37:58.980810  420794 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 01:37:58.995411  420794 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 01:37:59.013097  420794 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 01:37:59.013413  420794 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-739084 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 01:37:59.026016  420794 kubeadm.go:319] [bootstrap-token] Using token: 4eyvoq.4u1v65gvjgz3q74t
	I1217 01:37:59.027400  420794 out.go:252]   - Configuring RBAC rules ...
	I1217 01:37:59.027548  420794 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 01:37:59.036829  420794 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 01:37:59.046199  420794 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 01:37:59.049778  420794 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 01:37:59.053378  420794 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 01:37:59.063251  420794 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 01:37:59.371031  420794 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 01:37:59.844869  420794 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 01:38:00.373353  420794 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 01:38:00.373380  420794 kubeadm.go:319] 
	I1217 01:38:00.373426  420794 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 01:38:00.373431  420794 kubeadm.go:319] 
	I1217 01:38:00.373498  420794 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 01:38:00.373508  420794 kubeadm.go:319] 
	I1217 01:38:00.373541  420794 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 01:38:00.373634  420794 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 01:38:00.373682  420794 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 01:38:00.373689  420794 kubeadm.go:319] 
	I1217 01:38:00.373736  420794 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 01:38:00.373745  420794 kubeadm.go:319] 
	I1217 01:38:00.373797  420794 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 01:38:00.373823  420794 kubeadm.go:319] 
	I1217 01:38:00.373886  420794 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 01:38:00.374045  420794 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 01:38:00.374154  420794 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 01:38:00.374165  420794 kubeadm.go:319] 
	I1217 01:38:00.374265  420794 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 01:38:00.374368  420794 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 01:38:00.374377  420794 kubeadm.go:319] 
	I1217 01:38:00.374494  420794 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4eyvoq.4u1v65gvjgz3q74t \
	I1217 01:38:00.374615  420794 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ca58722b6fa750dc2bd4a2291e4cc69cf0e7070c9e2871502e34e19ae3eb0d75 \
	I1217 01:38:00.374637  420794 kubeadm.go:319] 	--control-plane 
	I1217 01:38:00.374643  420794 kubeadm.go:319] 
	I1217 01:38:00.374715  420794 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 01:38:00.374721  420794 kubeadm.go:319] 
	I1217 01:38:00.374820  420794 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4eyvoq.4u1v65gvjgz3q74t \
	I1217 01:38:00.374959  420794 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ca58722b6fa750dc2bd4a2291e4cc69cf0e7070c9e2871502e34e19ae3eb0d75 
	I1217 01:38:00.376443  420794 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:38:00.376480  420794 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1217 01:38:00.376511  420794 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 01:38:00.376616  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:00.376651  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-739084 minikube.k8s.io/updated_at=2025_12_17T01_38_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=01d79b9148bfd5d0fb5c58e68a67a3019fb121c2 minikube.k8s.io/name=kubenet-739084 minikube.k8s.io/primary=true
	I1217 01:38:00.533064  420794 ops.go:34] apiserver oom_adj: -16
	I1217 01:38:00.533081  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:01.033956  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:01.533224  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:02.033358  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:02.533999  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:03.033246  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:03.534105  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:04.034155  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:04.534050  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:05.033666  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:05.533467  420794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:38:05.661603  420794 kubeadm.go:1114] duration metric: took 5.285046737s to wait for elevateKubeSystemPrivileges
	I1217 01:38:05.661670  420794 kubeadm.go:403] duration metric: took 18.485541627s to StartCluster
	I1217 01:38:05.661696  420794 settings.go:142] acquiring lock: {Name:mkeb120f40878e424f656187ad4d7c7606f2c72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:38:05.661792  420794 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 01:38:05.663982  420794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/kubeconfig: {Name:mk87363b2fb37bff5c17f520ea75f91cfc69c318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:38:05.664289  420794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 01:38:05.664288  420794 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:38:05.664577  420794 config.go:182] Loaded profile config "kubenet-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:38:05.664613  420794 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:38:05.664676  420794 cache.go:107] acquiring lock: {Name:mkcbe05609d085a26759a10d252d7c9102a94b07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:38:05.664715  420794 addons.go:70] Setting storage-provisioner=true in profile "kubenet-739084"
	I1217 01:38:05.664735  420794 addons.go:239] Setting addon storage-provisioner=true in "kubenet-739084"
	I1217 01:38:05.664751  420794 cache.go:115] /home/jenkins/minikube-integration/22140-379084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1217 01:38:05.664761  420794 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/22140-379084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 107.918µs
	I1217 01:38:05.664771  420794 host.go:66] Checking if "kubenet-739084" exists ...
	I1217 01:38:05.664778  420794 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/22140-379084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1217 01:38:05.664787  420794 cache.go:87] Successfully saved all images to host disk.
	I1217 01:38:05.664900  420794 addons.go:70] Setting default-storageclass=true in profile "kubenet-739084"
	I1217 01:38:05.664938  420794 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-739084"
	I1217 01:38:05.665007  420794 config.go:182] Loaded profile config "kubenet-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:38:05.667731  420794 out.go:179] * Verifying Kubernetes components...
	I1217 01:38:05.669170  420794 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	W1217 01:38:05.669740  420794 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error getting storagev1 interface client config: context "kubenet-739084" does not exist : client config: context "kubenet-739084" does not exist]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error getting storagev1 interface client config: context "kubenet-739084" does not exist : client config: context "kubenet-739084" does not exist]
	I1217 01:38:05.669990  420794 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:38:05.670092  420794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:38:05.671654  420794 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:38:05.671674  420794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 01:38:05.673739  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:38:05.674581  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:38:05.674619  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:38:05.675095  420794 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/id_rsa Username:docker}
	I1217 01:38:05.676156  420794 main.go:143] libmachine: domain kubenet-739084 has defined MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:38:05.676724  420794 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:79:65", ip: ""} in network mk-kubenet-739084: {Iface:virbr5 ExpiryTime:2025-12-17 02:37:32 +0000 UTC Type:0 Mac:52:54:00:9b:79:65 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:kubenet-739084 Clientid:01:52:54:00:9b:79:65}
	I1217 01:38:05.676760  420794 main.go:143] libmachine: domain kubenet-739084 has defined IP address 192.168.83.31 and MAC address 52:54:00:9b:79:65 in network mk-kubenet-739084
	I1217 01:38:05.676977  420794 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/kubenet-739084/id_rsa Username:docker}
	I1217 01:38:05.985329  420794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 01:38:06.066446  420794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:38:06.622631  420794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:38:07.230811  420794 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.245440388s)
	I1217 01:38:07.230845  420794 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	E1217 01:38:07.231546  420794 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: client: client config: context "kubenet-739084" does not exist
	I1217 01:38:07.231595  420794 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.165122399s)
	I1217 01:38:07.232034  420794 ssh_runner.go:235] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.562841937s)
	I1217 01:38:07.232067  420794 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:38:07.232076  420794 docker.go:697] gcr.io/k8s-minikube/gvisor-addon:2 wasn't preloaded
	I1217 01:38:07.232084  420794 cache_images.go:90] LoadCachedImages start: [gcr.io/k8s-minikube/gvisor-addon:2]
	I1217 01:38:07.234601  420794 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2
	I1217 01:38:07.239682  420794 out.go:203] 
	W1217 01:38:07.242129  420794 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: kubernetes client: client config: client config: context "kubenet-739084" does not exist
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: kubernetes client: client config: client config: context "kubenet-739084" does not exist
	W1217 01:38:07.242154  420794 out.go:285] * 
	* 
	W1217 01:38:07.246195  420794 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:38:07.247457  420794 out.go:203] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (52.99s)
TestISOImage/PersistentMounts//data (0s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /data | grep /data"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /data | grep /data": context deadline exceeded (3.039µs)
iso_test.go:99: failed to verify existence of "/data" mount. args "out/minikube-linux-amd64 -p guest-625557 ssh \"df -t ext4 /data | grep /data\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//data (0.00s)
TestISOImage/PersistentMounts//var/lib/docker (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker": context deadline exceeded (254ns)
iso_test.go:99: failed to verify existence of "/var/lib/docker" mount. args "out/minikube-linux-amd64 -p guest-625557 ssh \"df -t ext4 /var/lib/docker | grep /var/lib/docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/docker (0.00s)
TestISOImage/PersistentMounts//var/lib/cni (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni": context deadline exceeded (494ns)
iso_test.go:99: failed to verify existence of "/var/lib/cni" mount. args "out/minikube-linux-amd64 -p guest-625557 ssh \"df -t ext4 /var/lib/cni | grep /var/lib/cni\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/cni (0.00s)
TestISOImage/PersistentMounts//var/lib/kubelet (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet": context deadline exceeded (238ns)
iso_test.go:99: failed to verify existence of "/var/lib/kubelet" mount. args "out/minikube-linux-amd64 -p guest-625557 ssh \"df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/kubelet (0.00s)
TestISOImage/PersistentMounts//var/lib/minikube (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube": context deadline exceeded (305ns)
iso_test.go:99: failed to verify existence of "/var/lib/minikube" mount. args "out/minikube-linux-amd64 -p guest-625557 ssh \"df -t ext4 /var/lib/minikube | grep /var/lib/minikube\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/minikube (0.00s)
TestISOImage/PersistentMounts//var/lib/toolbox (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox": context deadline exceeded (293ns)
iso_test.go:99: failed to verify existence of "/var/lib/toolbox" mount. args "out/minikube-linux-amd64 -p guest-625557 ssh \"df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/toolbox (0.00s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker": context deadline exceeded (232ns)
iso_test.go:99: failed to verify existence of "/var/lib/boot2docker" mount. args "out/minikube-linux-amd64 -p guest-625557 ssh \"df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/boot2docker (0.00s)

TestISOImage/VersionJSON (0s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "cat /version.json"
iso_test.go:106: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "cat /version.json": context deadline exceeded (985ns)
iso_test.go:108: failed to read /version.json. args "out/minikube-linux-amd64 -p guest-625557 ssh \"cat /version.json\"": context deadline exceeded
--- FAIL: TestISOImage/VersionJSON (0.00s)

TestISOImage/eBPFSupport (0s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
iso_test.go:125: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-625557 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'": context deadline exceeded (299ns)
iso_test.go:127: failed to verify existence of "/sys/kernel/btf/vmlinux" file: args "out/minikube-linux-amd64 -p guest-625557 ssh \"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'\"": context deadline exceeded
iso_test.go:131: expected file "/sys/kernel/btf/vmlinux" to exist, but it does not. BTF types are required for CO-RE eBPF programs; set CONFIG_DEBUG_INFO_BTF in kernel configuration.
--- FAIL: TestISOImage/eBPFSupport (0.00s)

Test pass (392/447)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 26.53
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 8.28
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 9.71
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.68
31 TestOffline 82.07
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.25
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.25
36 TestAddons/Setup 201.71
38 TestAddons/serial/Volcano 45.36
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 10.62
44 TestAddons/parallel/Registry 18.08
45 TestAddons/parallel/RegistryCreds 0.65
46 TestAddons/parallel/Ingress 22.38
47 TestAddons/parallel/InspektorGadget 10.73
48 TestAddons/parallel/MetricsServer 5.92
50 TestAddons/parallel/CSI 45.74
51 TestAddons/parallel/Headlamp 25.27
52 TestAddons/parallel/CloudSpanner 5.51
53 TestAddons/parallel/LocalPath 55.72
54 TestAddons/parallel/NvidiaDevicePlugin 6.47
55 TestAddons/parallel/Yakd 10.79
57 TestAddons/StoppedEnableDisable 12.34
58 TestCertOptions 70.9
59 TestCertExpiration 321.98
60 TestDockerFlags 78.87
61 TestForceSystemdFlag 60.7
62 TestForceSystemdEnv 60.17
67 TestErrorSpam/setup 39.85
68 TestErrorSpam/start 0.34
69 TestErrorSpam/status 0.66
70 TestErrorSpam/pause 1.27
71 TestErrorSpam/unpause 1.52
72 TestErrorSpam/stop 14.41
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.22
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 49.17
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.18
84 TestFunctional/serial/CacheCmd/cache/add_local 1.53
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.06
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 55.08
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 0.98
95 TestFunctional/serial/LogsFileCmd 1
96 TestFunctional/serial/InvalidService 4.05
98 TestFunctional/parallel/ConfigCmd 0.44
99 TestFunctional/parallel/DashboardCmd 12.79
100 TestFunctional/parallel/DryRun 0.25
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.71
106 TestFunctional/parallel/ServiceCmdConnect 33.55
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 35.95
110 TestFunctional/parallel/SSHCmd 0.35
111 TestFunctional/parallel/CpCmd 1.16
112 TestFunctional/parallel/MySQL 42.35
113 TestFunctional/parallel/FileSync 0.18
114 TestFunctional/parallel/CertSync 1.15
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.19
122 TestFunctional/parallel/License 0.48
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
125 TestFunctional/parallel/ProfileCmd/profile_list 0.35
126 TestFunctional/parallel/MountCmd/any-port 7.21
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
128 TestFunctional/parallel/MountCmd/specific-port 1.71
129 TestFunctional/parallel/ServiceCmd/List 0.48
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
133 TestFunctional/parallel/ServiceCmd/Format 0.27
134 TestFunctional/parallel/ServiceCmd/URL 0.28
135 TestFunctional/parallel/Version/short 0.06
136 TestFunctional/parallel/Version/components 0.49
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
141 TestFunctional/parallel/ImageCommands/ImageBuild 5.82
142 TestFunctional/parallel/ImageCommands/Setup 2.12
143 TestFunctional/parallel/DockerEnv/bash 0.78
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.02
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 74.52
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 50.46
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.1
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.3
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.5
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.04
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 54.19
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.97
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.14
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 31.77
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.24
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.93
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 23.73
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 65.67
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.37
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.34
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 40.65
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.17
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.11
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
214 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.18
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.23
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.02
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.32
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.69
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.26
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.18
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.18
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.28
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.22
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.94
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.05
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash 0.83
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.75
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.09
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.09
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.68
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.31
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.36
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.61
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.37
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.36
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.3
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.32
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.37
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.44
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.54
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.31
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
260 TestGvisorAddon 224.82
263 TestMultiControlPlane/serial/StartCluster 206.79
264 TestMultiControlPlane/serial/DeployApp 6.65
265 TestMultiControlPlane/serial/PingHostFromPods 1.37
266 TestMultiControlPlane/serial/AddWorkerNode 46.65
267 TestMultiControlPlane/serial/NodeLabels 0.07
268 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
269 TestMultiControlPlane/serial/CopyFile 10.64
270 TestMultiControlPlane/serial/StopSecondaryNode 13.18
271 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
272 TestMultiControlPlane/serial/RestartSecondaryNode 29.44
273 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
274 TestMultiControlPlane/serial/RestartClusterKeepsNodes 163.33
275 TestMultiControlPlane/serial/DeleteSecondaryNode 7.01
276 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
277 TestMultiControlPlane/serial/StopCluster 37.68
278 TestMultiControlPlane/serial/RestartCluster 111.74
279 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
280 TestMultiControlPlane/serial/AddSecondaryNode 111.33
281 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.69
284 TestImageBuild/serial/Setup 40.01
285 TestImageBuild/serial/NormalBuild 1.5
286 TestImageBuild/serial/BuildWithBuildArg 0.92
287 TestImageBuild/serial/BuildWithDockerIgnore 0.64
288 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.83
293 TestJSONOutput/start/Command 80.01
294 TestJSONOutput/start/Audit 0
296 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/pause/Command 0.58
300 TestJSONOutput/pause/Audit 0
302 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/unpause/Command 0.55
306 TestJSONOutput/unpause/Audit 0
308 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
311 TestJSONOutput/stop/Command 14.01
312 TestJSONOutput/stop/Audit 0
314 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
315 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
316 TestErrorJSONOutput 0.23
321 TestMainNoArgs 0.06
322 TestMinikubeProfile 85.84
325 TestMountStart/serial/StartWithMountFirst 20.79
326 TestMountStart/serial/VerifyMountFirst 0.31
327 TestMountStart/serial/StartWithMountSecond 20.28
328 TestMountStart/serial/VerifyMountSecond 0.32
329 TestMountStart/serial/DeleteFirst 0.73
330 TestMountStart/serial/VerifyMountPostDelete 0.33
331 TestMountStart/serial/Stop 1.27
332 TestMountStart/serial/RestartStopped 20.39
333 TestMountStart/serial/VerifyMountPostStop 0.31
336 TestMultiNode/serial/FreshStart2Nodes 110.81
337 TestMultiNode/serial/DeployApp2Nodes 4.93
338 TestMultiNode/serial/PingHostFrom2Pods 0.89
339 TestMultiNode/serial/AddNode 47.03
340 TestMultiNode/serial/MultiNodeLabels 0.06
341 TestMultiNode/serial/ProfileList 0.44
342 TestMultiNode/serial/CopyFile 6
343 TestMultiNode/serial/StopNode 2.27
344 TestMultiNode/serial/StartAfterStop 38.13
345 TestMultiNode/serial/RestartKeepsNodes 162.41
346 TestMultiNode/serial/DeleteNode 2.02
347 TestMultiNode/serial/StopMultiNode 24.9
348 TestMultiNode/serial/RestartMultiNode 97.36
349 TestMultiNode/serial/ValidateNameConflict 42.77
354 TestPreload 138.9
356 TestScheduledStopUnix 110.8
357 TestSkaffold 120.87
360 TestRunningBinaryUpgrade 370.87
362 TestKubernetesUpgrade 155.28
366 TestISOImage/Setup 57.04
377 TestISOImage/Binaries/crictl 0.2
378 TestISOImage/Binaries/curl 0.19
379 TestISOImage/Binaries/docker 0.2
380 TestISOImage/Binaries/git 0.19
381 TestISOImage/Binaries/iptables 0.2
382 TestISOImage/Binaries/podman 0.18
383 TestISOImage/Binaries/rsync 0.18
384 TestISOImage/Binaries/socat 0.19
385 TestISOImage/Binaries/wget 0.18
386 TestISOImage/Binaries/VBoxControl 0.17
387 TestISOImage/Binaries/VBoxService 0.18
395 TestStoppedBinaryUpgrade/Setup 3.75
396 TestStoppedBinaryUpgrade/Upgrade 126.3
398 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
399 TestNoKubernetes/serial/StartWithK8s 45.61
400 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
402 TestPause/serial/Start 107.97
403 TestNoKubernetes/serial/StartWithStopK8s 15.34
404 TestNoKubernetes/serial/Start 21.81
405 TestNetworkPlugins/group/auto/Start 73.34
406 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
407 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
408 TestNoKubernetes/serial/ProfileList 1.41
409 TestNoKubernetes/serial/Stop 1.35
410 TestNoKubernetes/serial/StartNoArgs 35.08
411 TestPause/serial/SecondStartNoReconfiguration 62.94
412 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
413 TestNetworkPlugins/group/kindnet/Start 73.37
414 TestNetworkPlugins/group/auto/KubeletFlags 0.17
415 TestNetworkPlugins/group/auto/NetCatPod 11.26
416 TestNetworkPlugins/group/auto/DNS 0.21
417 TestNetworkPlugins/group/auto/Localhost 0.17
418 TestNetworkPlugins/group/auto/HairPin 0.13
419 TestNetworkPlugins/group/calico/Start 93.09
420 TestPause/serial/Pause 0.69
421 TestPause/serial/VerifyStatus 0.26
422 TestPause/serial/Unpause 0.8
423 TestPause/serial/PauseAgain 0.8
424 TestPause/serial/DeletePaused 0.98
425 TestPause/serial/VerifyDeletedResources 15.43
426 TestNetworkPlugins/group/custom-flannel/Start 63.02
427 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
428 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
429 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
430 TestNetworkPlugins/group/kindnet/DNS 0.16
431 TestNetworkPlugins/group/kindnet/Localhost 0.14
432 TestNetworkPlugins/group/kindnet/HairPin 0.14
433 TestNetworkPlugins/group/false/Start 90.69
434 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
435 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
436 TestNetworkPlugins/group/calico/ControllerPod 6.01
437 TestNetworkPlugins/group/calico/KubeletFlags 0.18
438 TestNetworkPlugins/group/calico/NetCatPod 12.26
439 TestNetworkPlugins/group/custom-flannel/DNS 0.19
440 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
441 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
442 TestNetworkPlugins/group/calico/DNS 0.27
443 TestNetworkPlugins/group/calico/Localhost 0.18
444 TestNetworkPlugins/group/calico/HairPin 0.21
445 TestNetworkPlugins/group/enable-default-cni/Start 87.64
446 TestNetworkPlugins/group/flannel/Start 83.05
447 TestNetworkPlugins/group/bridge/Start 91.28
448 TestNetworkPlugins/group/false/KubeletFlags 0.21
449 TestNetworkPlugins/group/false/NetCatPod 10.23
450 TestNetworkPlugins/group/false/DNS 0.26
451 TestNetworkPlugins/group/false/Localhost 0.15
452 TestNetworkPlugins/group/false/HairPin 0.22
454 TestNetworkPlugins/group/flannel/ControllerPod 6.01
455 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
456 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.28
457 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
458 TestNetworkPlugins/group/flannel/NetCatPod 12.27
459 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
460 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
461 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
462 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
463 TestNetworkPlugins/group/bridge/NetCatPod 11.26
464 TestNetworkPlugins/group/flannel/DNS 0.19
465 TestNetworkPlugins/group/flannel/Localhost 0.16
466 TestNetworkPlugins/group/flannel/HairPin 0.17
467 TestNetworkPlugins/group/bridge/DNS 0.21
468 TestNetworkPlugins/group/bridge/Localhost 0.16
469 TestNetworkPlugins/group/bridge/HairPin 0.16
471 TestStartStop/group/old-k8s-version/serial/FirstStart 99.41
473 TestStartStop/group/no-preload/serial/FirstStart 108.42
475 TestStartStop/group/embed-certs/serial/FirstStart 92.8
477 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 134.96
478 TestStartStop/group/old-k8s-version/serial/DeployApp 10.34
479 TestStartStop/group/embed-certs/serial/DeployApp 9.33
480 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
481 TestStartStop/group/old-k8s-version/serial/Stop 13.57
482 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
483 TestStartStop/group/no-preload/serial/DeployApp 9.33
484 TestStartStop/group/embed-certs/serial/Stop 14.08
485 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
486 TestStartStop/group/no-preload/serial/Stop 13.53
487 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
488 TestStartStop/group/old-k8s-version/serial/SecondStart 42.59
489 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
490 TestStartStop/group/embed-certs/serial/SecondStart 61.3
491 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
492 TestStartStop/group/no-preload/serial/SecondStart 67.01
493 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
494 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
495 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.62
496 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
497 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
498 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.37
499 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
500 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
501 TestStartStop/group/old-k8s-version/serial/Pause 3.14
502 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
504 TestStartStop/group/newest-cni/serial/FirstStart 53.71
505 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
506 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
507 TestStartStop/group/embed-certs/serial/Pause 3.78
508 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
519 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
520 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
521 TestStartStop/group/no-preload/serial/Pause 2.82
522 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
523 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
524 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
525 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.44
526 TestStartStop/group/newest-cni/serial/DeployApp 0
527 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.73
528 TestStartStop/group/newest-cni/serial/Stop 14.43
529 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
530 TestStartStop/group/newest-cni/serial/SecondStart 29.14
531 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
532 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
533 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
534 TestStartStop/group/newest-cni/serial/Pause 2.21

TestDownloadOnly/v1.28.0/json-events (26.53s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-288545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-288545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (26.53274083s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (26.53s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 00:37:41.182657  383008 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1217 00:37:41.182788  383008 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-288545
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-288545: exit status 85 (79.544322ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-288545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-288545 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:37:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:37:14.705804  383020 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:14.705968  383020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:14.705979  383020 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:14.705984  383020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:14.706187  383020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	W1217 00:37:14.706310  383020 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22140-379084/.minikube/config/config.json: open /home/jenkins/minikube-integration/22140-379084/.minikube/config/config.json: no such file or directory
	I1217 00:37:14.707001  383020 out.go:368] Setting JSON to true
	I1217 00:37:14.708628  383020 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4774,"bootTime":1765927061,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:37:14.708694  383020 start.go:143] virtualization: kvm guest
	I1217 00:37:14.711457  383020 out.go:99] [download-only-288545] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 00:37:14.711576  383020 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 00:37:14.711636  383020 notify.go:221] Checking for updates...
	I1217 00:37:14.712830  383020 out.go:171] MINIKUBE_LOCATION=22140
	I1217 00:37:14.713980  383020 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:14.715088  383020 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 00:37:14.716175  383020 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 00:37:14.717156  383020 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 00:37:14.718933  383020 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:37:14.719172  383020 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:15.194807  383020 out.go:99] Using the kvm2 driver based on user configuration
	I1217 00:37:15.194835  383020 start.go:309] selected driver: kvm2
	I1217 00:37:15.194842  383020 start.go:927] validating driver "kvm2" against <nil>
	I1217 00:37:15.195239  383020 start_flags.go:331] no existing cluster config was found, will generate one from the flags 
	I1217 00:37:15.195773  383020 start_flags.go:414] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 00:37:15.195956  383020 start_flags.go:998] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:37:15.195981  383020 cni.go:84] Creating CNI manager for ""
	I1217 00:37:15.196045  383020 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:37:15.196054  383020 start_flags.go:340] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 00:37:15.196106  383020 start.go:353] cluster config:
	{Name:download-only-288545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-288545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 00:37:15.196595  383020 iso.go:125] acquiring lock: {Name:mk68dcf288160193f263ebe6317eb4b124893df0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:37:15.197936  383020 out.go:99] Downloading VM boot image ...
	I1217 00:37:15.197983  383020 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22140-379084/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 00:37:26.449466  383020 out.go:99] Starting "download-only-288545" primary control-plane node in "download-only-288545" cluster
	I1217 00:37:26.449512  383020 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 00:37:26.554830  383020 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1217 00:37:26.554899  383020 cache.go:65] Caching tarball of preloaded images
	I1217 00:37:26.555735  383020 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 00:37:26.557331  383020 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 00:37:26.557354  383020 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 00:37:26.671608  383020 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1217 00:37:26.671782  383020 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1217 00:37:39.618759  383020 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1217 00:37:39.619168  383020 profile.go:143] Saving config to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/download-only-288545/config.json ...
	I1217 00:37:39.619200  383020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/download-only-288545/config.json: {Name:mkd73605a6cad8147925e51eab2473412be88e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:39.619375  383020 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 00:37:39.619566  383020 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22140-379084/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-288545 host does not exist
	  To start a cluster, run: "minikube start -p download-only-288545"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-288545
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.2/json-events (8.28s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-503728 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-503728 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 : (8.281278675s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (8.28s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1217 00:37:49.863254  383008 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1217 00:37:49.863323  383008 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-503728
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-503728: exit status 85 (74.647845ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-288545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-288545 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete  │ -p download-only-288545                                                                                                                         │ download-only-288545 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -o=json --download-only -p download-only-503728 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 │ download-only-503728 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:37:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:37:41.636801  383301 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:41.637095  383301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:41.637112  383301 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:41.637119  383301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:41.637350  383301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 00:37:41.637855  383301 out.go:368] Setting JSON to true
	I1217 00:37:41.638768  383301 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4801,"bootTime":1765927061,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:37:41.638830  383301 start.go:143] virtualization: kvm guest
	I1217 00:37:41.640671  383301 out.go:99] [download-only-503728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:37:41.640901  383301 notify.go:221] Checking for updates...
	I1217 00:37:41.642110  383301 out.go:171] MINIKUBE_LOCATION=22140
	I1217 00:37:41.643453  383301 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:41.644595  383301 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 00:37:41.645664  383301 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 00:37:41.646703  383301 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 00:37:41.648565  383301 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:37:41.648776  383301 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:41.680955  383301 out.go:99] Using the kvm2 driver based on user configuration
	I1217 00:37:41.681011  383301 start.go:309] selected driver: kvm2
	I1217 00:37:41.681023  383301 start.go:927] validating driver "kvm2" against <nil>
	I1217 00:37:41.681349  383301 start_flags.go:331] no existing cluster config was found, will generate one from the flags 
	I1217 00:37:41.681791  383301 start_flags.go:414] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 00:37:41.681977  383301 start_flags.go:998] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:37:41.682003  383301 cni.go:84] Creating CNI manager for ""
	I1217 00:37:41.682075  383301 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:37:41.682086  383301 start_flags.go:340] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 00:37:41.682130  383301 start.go:353] cluster config:
	{Name:download-only-503728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-503728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 00:37:41.682231  383301 iso.go:125] acquiring lock: {Name:mk68dcf288160193f263ebe6317eb4b124893df0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:37:41.683366  383301 out.go:99] Starting "download-only-503728" primary control-plane node in "download-only-503728" cluster
	I1217 00:37:41.683388  383301 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1217 00:37:41.785135  383301 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1217 00:37:41.785177  383301 cache.go:65] Caching tarball of preloaded images
	I1217 00:37:41.786031  383301 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1217 00:37:41.787594  383301 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1217 00:37:41.787621  383301 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 00:37:41.899078  383301 preload.go:295] Got checksum from GCS API "cafa99c47d4d00983a02f051962239e0"
	I1217 00:37:41.899152  383301 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4?checksum=md5:cafa99c47d4d00983a02f051962239e0 -> /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-503728 host does not exist
	  To start a cluster, run: "minikube start -p download-only-503728"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-503728
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (9.71s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-059874 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-059874 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 : (9.712492446s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (9.71s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1217 00:37:59.961985  383008 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1217 00:37:59.962043  383008 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-059874
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-059874: exit status 85 (72.088497ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-288545 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2        │ download-only-288545 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete  │ -p download-only-288545                                                                                                                                │ download-only-288545 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -o=json --download-only -p download-only-503728 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2        │ download-only-503728 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete  │ -p download-only-503728                                                                                                                                │ download-only-503728 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -o=json --download-only -p download-only-059874 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 │ download-only-059874 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:37:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:37:50.306135  383500 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:50.306256  383500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:50.306261  383500 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:50.306265  383500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:50.306478  383500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 00:37:50.306958  383500 out.go:368] Setting JSON to true
	I1217 00:37:50.307772  383500 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4809,"bootTime":1765927061,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:37:50.307834  383500 start.go:143] virtualization: kvm guest
	I1217 00:37:50.309752  383500 out.go:99] [download-only-059874] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:37:50.309935  383500 notify.go:221] Checking for updates...
	I1217 00:37:50.311078  383500 out.go:171] MINIKUBE_LOCATION=22140
	I1217 00:37:50.312266  383500 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:50.313474  383500 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 00:37:50.314609  383500 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 00:37:50.315798  383500 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 00:37:50.318019  383500 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:37:50.318336  383500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:50.349931  383500 out.go:99] Using the kvm2 driver based on user configuration
	I1217 00:37:50.349979  383500 start.go:309] selected driver: kvm2
	I1217 00:37:50.349988  383500 start.go:927] validating driver "kvm2" against <nil>
	I1217 00:37:50.350351  383500 start_flags.go:331] no existing cluster config was found, will generate one from the flags 
	I1217 00:37:50.350924  383500 start_flags.go:414] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 00:37:50.351099  383500 start_flags.go:998] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:37:50.351129  383500 cni.go:84] Creating CNI manager for ""
	I1217 00:37:50.351205  383500 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:37:50.351219  383500 start_flags.go:340] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 00:37:50.351291  383500 start.go:353] cluster config:
	{Name:download-only-059874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-059874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 00:37:50.351430  383500 iso.go:125] acquiring lock: {Name:mk68dcf288160193f263ebe6317eb4b124893df0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:37:50.352701  383500 out.go:99] Starting "download-only-059874" primary control-plane node in "download-only-059874" cluster
	I1217 00:37:50.352734  383500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:37:50.455169  383500 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:37:50.455207  383500 cache.go:65] Caching tarball of preloaded images
	I1217 00:37:50.455380  383500 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:37:50.457080  383500 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1217 00:37:50.457115  383500 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 00:37:50.568407  383500 preload.go:295] Got checksum from GCS API "7f0e1a4aaa3540d32279d04bf9728fae"
	I1217 00:37:50.568475  383500 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:7f0e1a4aaa3540d32279d04bf9728fae -> /home/jenkins/minikube-integration/22140-379084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-059874 host does not exist
	  To start a cluster, run: "minikube start -p download-only-059874"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-059874
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
I1217 00:38:00.792890  383008 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-159700 --alsologtostderr --binary-mirror http://127.0.0.1:36203 --driver=kvm2 
helpers_test.go:176: Cleaning up "binary-mirror-159700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-159700
--- PASS: TestBinaryMirror (0.68s)

TestOffline (82.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-519012 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-519012 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m21.18899574s)
helpers_test.go:176: Cleaning up "offline-docker-519012" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-519012
--- PASS: TestOffline (82.07s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-411941
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-411941: exit status 85 (254.138755ms)

-- stdout --
	* Profile "addons-411941" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-411941"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-411941
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-411941: exit status 85 (253.422052ms)

-- stdout --
	* Profile "addons-411941" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-411941"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)

TestAddons/Setup (201.71s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-411941 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-411941 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m21.713795665s)
--- PASS: TestAddons/Setup (201.71s)

TestAddons/serial/Volcano (45.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 29.077956ms
addons_test.go:886: volcano-controller stabilized in 29.150134ms
addons_test.go:878: volcano-admission stabilized in 29.190536ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-chrzq" [4184e24e-8054-408a-82ed-9afdc92d345f] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00543908s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-wnp7x" [b404b872-bf53-4e05-89a1-4173b9274e3b] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005539956s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-h9djg" [1fa032db-855f-4a6e-86ce-18e41d953da9] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004371115s
addons_test.go:905: (dbg) Run:  kubectl --context addons-411941 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-411941 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-411941 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [9571f7fd-5ac1-4523-bba8-eb5fedaa122a] Pending
helpers_test.go:353: "test-job-nginx-0" [9571f7fd-5ac1-4523-bba8-eb5fedaa122a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [9571f7fd-5ac1-4523-bba8-eb5fedaa122a] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 17.008245082s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable volcano --alsologtostderr -v=1: (11.914928927s)
--- PASS: TestAddons/serial/Volcano (45.36s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-411941 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-411941 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (10.62s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-411941 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-411941 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4807b840-c3a9-4dd4-b2f4-ed6727b4064b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4807b840-c3a9-4dd4-b2f4-ed6727b4064b] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004966351s
addons_test.go:696: (dbg) Run:  kubectl --context addons-411941 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-411941 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-411941 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.62s)

TestAddons/parallel/Registry (18.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.557217ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-jmqz4" [15d2ddc6-eeda-49ab-96bf-18843cd24403] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008874599s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zp88v" [1460ed2c-21ac-4f80-8945-49e48e7313ae] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005625578s
addons_test.go:394: (dbg) Run:  kubectl --context addons-411941 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-411941 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-411941 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.300988031s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 ip
2025/12/17 00:42:46 [DEBUG] GET http://192.168.39.32:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.08s)

TestAddons/parallel/RegistryCreds (0.65s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 8.225787ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-411941
addons_test.go:334: (dbg) Run:  kubectl --context addons-411941 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

TestAddons/parallel/Ingress (22.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-411941 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-411941 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-411941 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [37f80df4-00f9-462c-9cb8-99d5302263ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [37f80df4-00f9-462c-9cb8-99d5302263ae] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.229706162s
I1217 00:42:40.849164  383008 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-411941 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.32
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable ingress-dns --alsologtostderr -v=1: (1.346022714s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable ingress --alsologtostderr -v=1: (8.197571629s)
--- PASS: TestAddons/parallel/Ingress (22.38s)

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-fs47g" [8f816b47-37a0-4624-a75b-0697a3fb339f] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006695788s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable inspektor-gadget --alsologtostderr -v=1: (5.725097827s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.857615ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-pzfgq" [e5b5b2a0-4128-46c6-b691-aa6a4fb8c521] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006652566s
addons_test.go:465: (dbg) Run:  kubectl --context addons-411941 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

TestAddons/parallel/CSI (45.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1217 00:42:52.207948  383008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 00:42:52.217466  383008 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 00:42:52.217500  383008 kapi.go:107] duration metric: took 9.559435ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 9.574886ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-411941 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-411941 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [1cea8c1e-b533-44ba-acf7-df19e1bb6ae2] Pending
helpers_test.go:353: "task-pv-pod" [1cea8c1e-b533-44ba-acf7-df19e1bb6ae2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [1cea8c1e-b533-44ba-acf7-df19e1bb6ae2] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004639004s
addons_test.go:574: (dbg) Run:  kubectl --context addons-411941 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-411941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-411941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-411941 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-411941 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-411941 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-411941 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [64a57d59-349a-4557-b692-defdaed26371] Pending
helpers_test.go:353: "task-pv-pod-restore" [64a57d59-349a-4557-b692-defdaed26371] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.004196726s
addons_test.go:616: (dbg) Run:  kubectl --context addons-411941 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-411941 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-411941 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.723366724s)
--- PASS: TestAddons/parallel/CSI (45.74s)

TestAddons/parallel/Headlamp (25.27s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-411941 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-411941 --alsologtostderr -v=1: (1.321884906s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-sfth7" [a211dd2e-f95e-4f62-b435-f225cc4f2dfc] Pending
helpers_test.go:353: "headlamp-dfcdc64b-sfth7" [a211dd2e-f95e-4f62-b435-f225cc4f2dfc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-sfth7" [a211dd2e-f95e-4f62-b435-f225cc4f2dfc] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.0048425s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable headlamp --alsologtostderr -v=1: (5.946389703s)
--- PASS: TestAddons/parallel/Headlamp (25.27s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-4z7pp" [d6914414-e483-4a0a-8120-b4f9743502c3] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006013347s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/parallel/LocalPath (55.72s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-411941 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-411941 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [2ab713b8-7ba6-491b-8c5f-f3b47748b77f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [2ab713b8-7ba6-491b-8c5f-f3b47748b77f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [2ab713b8-7ba6-491b-8c5f-f3b47748b77f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005731484s
addons_test.go:969: (dbg) Run:  kubectl --context addons-411941 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 ssh "cat /opt/local-path-provisioner/pvc-370b8f78-ea30-4991-9823-472ae0b7a802_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-411941 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-411941 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.839861778s)
--- PASS: TestAddons/parallel/LocalPath (55.72s)

TestAddons/parallel/NvidiaDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-hkhh2" [ce78ae48-92ff-4d17-b519-57ceed0782d8] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003654272s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (10.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-5qvnp" [c2a1baff-219e-4b0f-88bd-26d50b91592e] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.012774075s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-411941 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-411941 addons disable yakd --alsologtostderr -v=1: (5.774236866s)
--- PASS: TestAddons/parallel/Yakd (10.79s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-411941
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-411941: (12.116264812s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-411941
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-411941
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-411941
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

TestCertOptions (70.9s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-099482 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1217 01:28:10.807231  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-099482 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m9.314614987s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-099482 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-099482 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-099482 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-099482" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-099482
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-099482: (1.115048837s)
--- PASS: TestCertOptions (70.90s)

TestCertExpiration (321.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-868286 --memory=3072 --cert-expiration=3m --driver=kvm2 
E1217 01:27:19.415680  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-868286 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m16.356885851s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-868286 --memory=3072 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-868286 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m4.795420743s)
helpers_test.go:176: Cleaning up "cert-expiration-868286" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-868286
--- PASS: TestCertExpiration (321.98s)

TestDockerFlags (78.87s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-147766 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-147766 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m17.624701852s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-147766 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-147766 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-147766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-147766
--- PASS: TestDockerFlags (78.87s)

TestForceSystemdFlag (60.7s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-558142 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-558142 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (59.640734288s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-558142 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-558142" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-558142
--- PASS: TestForceSystemdFlag (60.70s)

TestForceSystemdEnv (60.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-550036 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-550036 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (59.029180132s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-550036 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-550036" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-550036
--- PASS: TestForceSystemdEnv (60.17s)

TestErrorSpam/setup (39.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-388899 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-388899 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-388899 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-388899 --driver=kvm2 : (39.851010439s)
--- PASS: TestErrorSpam/setup (39.85s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.66s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 status
--- PASS: TestErrorSpam/status (0.66s)

TestErrorSpam/pause (1.27s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 pause
--- PASS: TestErrorSpam/pause (1.27s)

TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (14.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 stop: (11.618996732s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 stop: (1.255508649s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-388899 --log_dir /tmp/nospam-388899 stop: (1.536621275s)
--- PASS: TestErrorSpam/stop (14.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22140-379084/.minikube/files/etc/test/nested/copy/383008/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-989491 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-989491 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m20.224423254s)
--- PASS: TestFunctional/serial/StartWithProxy (80.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (49.17s)

=== RUN   TestFunctional/serial/SoftStart
I1217 00:46:14.792720  383008 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-989491 --alsologtostderr -v=8
E1217 00:46:24.060052  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:24.066478  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:24.077826  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:24.099192  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:24.140552  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:24.222028  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:24.383582  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:24.705303  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:25.347414  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:26.629570  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:29.191011  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:34.312596  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:44.554676  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-989491 --alsologtostderr -v=8: (49.172483764s)
functional_test.go:678: soft start took 49.173171256s for "functional-989491" cluster.
I1217 00:47:03.965583  383008 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (49.17s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-989491 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cache add registry.k8s.io/pause:3.3
E1217 00:47:05.036966  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.18s)

TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-989491 /tmp/TestFunctionalserialCacheCmdcacheadd_local524818038/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cache add minikube-local-cache-test:functional-989491
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-989491 cache add minikube-local-cache-test:functional-989491: (1.165182804s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cache delete minikube-local-cache-test:functional-989491
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-989491
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (182.37786ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 kubectl -- --context functional-989491 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-989491 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (55.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-989491 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:47:45.999768  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-989491 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.079427249s)
functional_test.go:776: restart took 55.079575317s for "functional-989491" cluster.
I1217 00:48:04.692618  383008 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (55.08s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-989491 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.98s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 logs
--- PASS: TestFunctional/serial/LogsCmd (0.98s)

TestFunctional/serial/LogsFileCmd (1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 logs --file /tmp/TestFunctionalserialLogsFileCmd3654372472/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-989491 logs --file /tmp/TestFunctionalserialLogsFileCmd3654372472/001/logs.txt: (1.002127808s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.00s)

TestFunctional/serial/InvalidService (4.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-989491 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-989491
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-989491: exit status 115 (244.371503ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.216:31202 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-989491 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 config get cpus: exit status 14 (75.468124ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 config get cpus: exit status 14 (65.870965ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.79s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-989491 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-989491 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 388605: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.79s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-989491 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-989491 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (123.791813ms)
-- stdout --
	* [functional-989491] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1217 00:48:13.204467  388548 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:48:13.204585  388548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:48:13.204594  388548 out.go:374] Setting ErrFile to fd 2...
	I1217 00:48:13.204598  388548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:48:13.204784  388548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 00:48:13.205258  388548 out.go:368] Setting JSON to false
	I1217 00:48:13.206132  388548 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5432,"bootTime":1765927061,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:48:13.206190  388548 start.go:143] virtualization: kvm guest
	I1217 00:48:13.208153  388548 out.go:179] * [functional-989491] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:48:13.209344  388548 out.go:179]   - MINIKUBE_LOCATION=22140
	I1217 00:48:13.209421  388548 notify.go:221] Checking for updates...
	I1217 00:48:13.211480  388548 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:48:13.212645  388548 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 00:48:13.213797  388548 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 00:48:13.214867  388548 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:48:13.215937  388548 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:48:13.217393  388548 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 00:48:13.217967  388548 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:48:13.250600  388548 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:48:13.252978  388548 start.go:309] selected driver: kvm2
	I1217 00:48:13.253003  388548 start.go:927] validating driver "kvm2" against &{Name:functional-989491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-989491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 00:48:13.253217  388548 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:48:13.255766  388548 out.go:203] 
	W1217 00:48:13.256937  388548 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:48:13.257919  388548 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-989491 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-989491 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-989491 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (120.061858ms)
-- stdout --
	* [functional-989491] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1217 00:48:13.078830  388526 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:48:13.078977  388526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:48:13.078988  388526 out.go:374] Setting ErrFile to fd 2...
	I1217 00:48:13.078993  388526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:48:13.079335  388526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 00:48:13.079757  388526 out.go:368] Setting JSON to false
	I1217 00:48:13.080687  388526 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5432,"bootTime":1765927061,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:48:13.080753  388526 start.go:143] virtualization: kvm guest
	I1217 00:48:13.082302  388526 out.go:179] * [functional-989491] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 00:48:13.083391  388526 notify.go:221] Checking for updates...
	I1217 00:48:13.083410  388526 out.go:179]   - MINIKUBE_LOCATION=22140
	I1217 00:48:13.084511  388526 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:48:13.085672  388526 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 00:48:13.087065  388526 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 00:48:13.088179  388526 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:48:13.089237  388526 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:48:13.090786  388526 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 00:48:13.091316  388526 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:48:13.129025  388526 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 00:48:13.130294  388526 start.go:309] selected driver: kvm2
	I1217 00:48:13.130313  388526 start.go:927] validating driver "kvm2" against &{Name:functional-989491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-989491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 00:48:13.130425  388526 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:48:13.132331  388526 out.go:203] 
	W1217 00:48:13.133352  388526 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:48:13.134268  388526 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.71s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (33.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-989491 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-989491 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-5mmbv" [414c83e0-b8dc-43da-a5e7-1d60f15eadf1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-5mmbv" [414c83e0-b8dc-43da-a5e7-1d60f15eadf1] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 33.004362472s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.216:30210
functional_test.go:1680: http://192.168.39.216:30210: success! body:
Request served by hello-node-connect-7d85dfc575-5mmbv

HTTP/1.1 GET /

Host: 192.168.39.216:30210
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (33.55s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (35.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [0dda951f-6109-453e-9033-808b2894a8c6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004573296s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-989491 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-989491 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-989491 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-989491 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:48:17.192197  383008 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [498bf70f-234b-4d20-8fd3-775ab1a30a89] Pending
helpers_test.go:353: "sp-pod" [498bf70f-234b-4d20-8fd3-775ab1a30a89] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [498bf70f-234b-4d20-8fd3-775ab1a30a89] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004686527s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-989491 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-989491 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-989491 delete -f testdata/storage-provisioner/pod.yaml: (1.060390539s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-989491 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:48:39.580876  383008 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ae265296-fc08-4fdd-a03c-35a5fdb03a1a] Pending
helpers_test.go:353: "sp-pod" [ae265296-fc08-4fdd-a03c-35a5fdb03a1a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [ae265296-fc08-4fdd-a03c-35a5fdb03a1a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006897375s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-989491 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.95s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.35s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh -n functional-989491 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cp functional-989491:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2122354310/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh -n functional-989491 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh -n functional-989491 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.16s)

                                                
                                    
TestFunctional/parallel/MySQL (42.35s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-989491 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-8fjlw" [3b00e364-37b0-482c-bf4b-799032bfdf13] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/12/17 00:48:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "mysql-6bcdcbc558-8fjlw" [3b00e364-37b0-482c-bf4b-799032bfdf13] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 34.006257971s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;": exit status 1 (169.502054ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1217 00:48:59.102759  383008 retry.go:31] will retry after 1.443634107s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;": exit status 1 (189.01418ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1217 00:49:00.736297  383008 retry.go:31] will retry after 890.629485ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;": exit status 1 (215.170818ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1217 00:49:01.842865  383008 retry.go:31] will retry after 1.910017706s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;": exit status 1 (131.591856ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1217 00:49:03.885866  383008 retry.go:31] will retry after 3.054352728s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-989491 exec mysql-6bcdcbc558-8fjlw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (42.35s)

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/383008/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo cat /etc/test/nested/copy/383008/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/383008.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo cat /etc/ssl/certs/383008.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/383008.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo cat /usr/share/ca-certificates/383008.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3830082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo cat /etc/ssl/certs/3830082.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3830082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo cat /usr/share/ca-certificates/3830082.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.15s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-989491 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 ssh "sudo systemctl is-active crio": exit status 1 (189.897878ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)

TestFunctional/parallel/License (0.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-989491 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-989491 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-9xp5w" [a87b15fa-9f32-497c-9e99-e36834b5df32] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-9xp5w" [a87b15fa-9f32-497c-9e99-e36834b5df32] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.006585662s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "277.150136ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "73.4837ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/MountCmd/any-port (7.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdany-port3740401084/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765932491966787196" to /tmp/TestFunctionalparallelMountCmdany-port3740401084/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765932491966787196" to /tmp/TestFunctionalparallelMountCmdany-port3740401084/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765932491966787196" to /tmp/TestFunctionalparallelMountCmdany-port3740401084/001/test-1765932491966787196
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.791948ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 00:48:12.140895  383008 retry.go:31] will retry after 634.95364ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:48 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:48 test-1765932491966787196
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh cat /mount-9p/test-1765932491966787196
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-989491 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [3c1807a1-f4b5-4e1b-a8f9-56ab113ead61] Pending
helpers_test.go:353: "busybox-mount" [3c1807a1-f4b5-4e1b-a8f9-56ab113ead61] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [3c1807a1-f4b5-4e1b-a8f9-56ab113ead61] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [3c1807a1-f4b5-4e1b-a8f9-56ab113ead61] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004078033s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-989491 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdany-port3740401084/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "270.225473ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.565232ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/MountCmd/specific-port (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdspecific-port3365409695/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (190.50525ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 00:48:19.363978  383008 retry.go:31] will retry after 736.544231ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdspecific-port3365409695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 ssh "sudo umount -f /mount-9p": exit status 1 (198.769039ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-989491 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdspecific-port3365409695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 service list -o json
functional_test.go:1504: Took "453.764634ms" to run "out/minikube-linux-amd64 -p functional-989491 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2656619911/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2656619911/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2656619911/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T" /mount1: exit status 1 (215.408202ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 00:48:21.097194  383008 retry.go:31] will retry after 631.533855ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-989491 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2656619911/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2656619911/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-989491 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2656619911/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.216:30762
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.216:30762
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-989491 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-989491
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-989491
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-989491 image ls --format short --alsologtostderr:
I1217 00:48:31.147068  389471 out.go:360] Setting OutFile to fd 1 ...
I1217 00:48:31.147362  389471 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:31.147372  389471 out.go:374] Setting ErrFile to fd 2...
I1217 00:48:31.147377  389471 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:31.147559  389471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:48:31.148165  389471 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:31.148260  389471 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:31.150488  389471 ssh_runner.go:195] Run: systemctl --version
I1217 00:48:31.153177  389471 main.go:143] libmachine: domain functional-989491 has defined MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:31.153738  389471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:99:1f:d6", ip: ""} in network mk-functional-989491: {Iface:virbr1 ExpiryTime:2025-12-17 01:45:09 +0000 UTC Type:0 Mac:52:54:00:99:1f:d6 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-989491 Clientid:01:52:54:00:99:1f:d6}
I1217 00:48:31.153770  389471 main.go:143] libmachine: domain functional-989491 has defined IP address 192.168.39.216 and MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:31.153971  389471 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-989491/id_rsa Username:docker}
I1217 00:48:31.263924  389471 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-989491 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ localhost/my-image                          │ functional-989491 │ 7815883c17333 │ 1.24MB │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-989491 │ 8629fecf043c4 │ 30B    │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ docker.io/kicbase/echo-server               │ functional-989491 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-989491 image ls --format table --alsologtostderr:
I1217 00:48:37.616089  389565 out.go:360] Setting OutFile to fd 1 ...
I1217 00:48:37.616242  389565 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:37.616253  389565 out.go:374] Setting ErrFile to fd 2...
I1217 00:48:37.616259  389565 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:37.616488  389565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:48:37.617243  389565 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:37.617347  389565 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:37.619924  389565 ssh_runner.go:195] Run: systemctl --version
I1217 00:48:37.622177  389565 main.go:143] libmachine: domain functional-989491 has defined MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:37.622638  389565 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:99:1f:d6", ip: ""} in network mk-functional-989491: {Iface:virbr1 ExpiryTime:2025-12-17 01:45:09 +0000 UTC Type:0 Mac:52:54:00:99:1f:d6 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-989491 Clientid:01:52:54:00:99:1f:d6}
I1217 00:48:37.622666  389565 main.go:143] libmachine: domain functional-989491 has defined IP address 192.168.39.216 and MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:37.622842  389565 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-989491/id_rsa Username:docker}
I1217 00:48:37.714593  389565 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-989491 image ls --format json --alsologtostderr:
[{"id":"7815883c173330d65b2ec93983d93e6c9f6c3128930a51ada4bd779354b4611e","repoDigests":[],"repoTags":["localhost/my-image:functional-989491"],"size":"1240000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-989491","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"8629fecf043c412150623ee14623365d106d31dc71436910648c0cd1662c3c47","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-989491"],"size":"30"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-989491 image ls --format json --alsologtostderr:
I1217 00:48:37.397145  389554 out.go:360] Setting OutFile to fd 1 ...
I1217 00:48:37.397259  389554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:37.397270  389554 out.go:374] Setting ErrFile to fd 2...
I1217 00:48:37.397276  389554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:37.397472  389554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:48:37.398059  389554 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:37.398148  389554 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:37.400371  389554 ssh_runner.go:195] Run: systemctl --version
I1217 00:48:37.402960  389554 main.go:143] libmachine: domain functional-989491 has defined MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:37.403418  389554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:99:1f:d6", ip: ""} in network mk-functional-989491: {Iface:virbr1 ExpiryTime:2025-12-17 01:45:09 +0000 UTC Type:0 Mac:52:54:00:99:1f:d6 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-989491 Clientid:01:52:54:00:99:1f:d6}
I1217 00:48:37.403447  389554 main.go:143] libmachine: domain functional-989491 has defined IP address 192.168.39.216 and MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:37.403578  389554 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-989491/id_rsa Username:docker}
I1217 00:48:37.493403  389554 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-989491 image ls --format yaml --alsologtostderr:
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-989491
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 8629fecf043c412150623ee14623365d106d31dc71436910648c0cd1662c3c47
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-989491
size: "30"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-989491 image ls --format yaml --alsologtostderr:
I1217 00:48:31.376108  389498 out.go:360] Setting OutFile to fd 1 ...
I1217 00:48:31.376406  389498 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:31.376416  389498 out.go:374] Setting ErrFile to fd 2...
I1217 00:48:31.376421  389498 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:31.376634  389498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:48:31.377214  389498 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:31.377306  389498 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:31.379449  389498 ssh_runner.go:195] Run: systemctl --version
I1217 00:48:31.381587  389498 main.go:143] libmachine: domain functional-989491 has defined MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:31.382012  389498 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:99:1f:d6", ip: ""} in network mk-functional-989491: {Iface:virbr1 ExpiryTime:2025-12-17 01:45:09 +0000 UTC Type:0 Mac:52:54:00:99:1f:d6 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-989491 Clientid:01:52:54:00:99:1f:d6}
I1217 00:48:31.382041  389498 main.go:143] libmachine: domain functional-989491 has defined IP address 192.168.39.216 and MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:31.382173  389498 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-989491/id_rsa Username:docker}
I1217 00:48:31.477552  389498 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-989491 ssh pgrep buildkitd: exit status 1 (161.116657ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image build -t localhost/my-image:functional-989491 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-989491 image build -t localhost/my-image:functional-989491 testdata/build --alsologtostderr: (5.457490965s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-989491 image build -t localhost/my-image:functional-989491 testdata/build --alsologtostderr:
I1217 00:48:31.731146  389519 out.go:360] Setting OutFile to fd 1 ...
I1217 00:48:31.731282  389519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:31.731291  389519 out.go:374] Setting ErrFile to fd 2...
I1217 00:48:31.731296  389519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:48:31.731469  389519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:48:31.732038  389519 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:31.732693  389519 config.go:182] Loaded profile config "functional-989491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:48:31.734743  389519 ssh_runner.go:195] Run: systemctl --version
I1217 00:48:31.736833  389519 main.go:143] libmachine: domain functional-989491 has defined MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:31.737240  389519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:99:1f:d6", ip: ""} in network mk-functional-989491: {Iface:virbr1 ExpiryTime:2025-12-17 01:45:09 +0000 UTC Type:0 Mac:52:54:00:99:1f:d6 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-989491 Clientid:01:52:54:00:99:1f:d6}
I1217 00:48:31.737263  389519 main.go:143] libmachine: domain functional-989491 has defined IP address 192.168.39.216 and MAC address 52:54:00:99:1f:d6 in network mk-functional-989491
I1217 00:48:31.737418  389519 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-989491/id_rsa Username:docker}
I1217 00:48:31.819677  389519 build_images.go:162] Building image from path: /tmp/build.232137850.tar
I1217 00:48:31.819763  389519 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:48:31.832864  389519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.232137850.tar
I1217 00:48:31.838121  389519 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.232137850.tar: stat -c "%s %y" /var/lib/minikube/build/build.232137850.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.232137850.tar': No such file or directory
I1217 00:48:31.838162  389519 ssh_runner.go:362] scp /tmp/build.232137850.tar --> /var/lib/minikube/build/build.232137850.tar (3072 bytes)
I1217 00:48:31.878560  389519 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.232137850
I1217 00:48:31.891434  389519 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.232137850 -xf /var/lib/minikube/build/build.232137850.tar
I1217 00:48:31.902321  389519 docker.go:361] Building image: /var/lib/minikube/build/build.232137850
I1217 00:48:31.902397  389519 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-989491 /var/lib/minikube/build/build.232137850
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 1.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.7s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.9s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 2.2s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:7815883c173330d65b2ec93983d93e6c9f6c3128930a51ada4bd779354b4611e done
#8 naming to localhost/my-image:functional-989491 done
#8 DONE 0.1s
I1217 00:48:37.089326  389519 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-989491 /var/lib/minikube/build/build.232137850: (5.186896799s)
I1217 00:48:37.089409  389519 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.232137850
I1217 00:48:37.105768  389519 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.232137850.tar
I1217 00:48:37.121727  389519 build_images.go:218] Built localhost/my-image:functional-989491 from /tmp/build.232137850.tar
I1217 00:48:37.121788  389519 build_images.go:134] succeeded building to: functional-989491
I1217 00:48:37.121793  389519 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.82s)

TestFunctional/parallel/ImageCommands/Setup (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.100606125s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-989491
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.12s)

TestFunctional/parallel/DockerEnv/bash (0.78s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-989491 docker-env) && out/minikube-linux-amd64 status -p functional-989491"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-989491 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image load --daemon kicbase/echo-server:functional-989491 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image load --daemon kicbase/echo-server:functional-989491 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-989491
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image load --daemon kicbase/echo-server:functional-989491 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image save kicbase/echo-server:functional-989491 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image rm kicbase/echo-server:functional-989491 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-989491
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-989491 image save --daemon kicbase/echo-server:functional-989491 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-989491
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-989491
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-989491
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-989491
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22140-379084/.minikube/files/etc/test/nested/copy/383008/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-216033 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-216033 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (1m14.518152447s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (50.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1217 00:50:22.728414  383008 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-216033 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-216033 --alsologtostderr -v=8: (50.458898215s)
functional_test.go:678: soft start took 50.459420931s for "functional-216033" cluster.
I1217 00:51:13.187699  383008 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (50.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-216033 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1976471615/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cache add minikube-local-cache-test:functional-216033
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-216033 cache add minikube-local-cache-test:functional-216033: (1.181403464s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cache delete minikube-local-cache-test:functional-216033
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-216033
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (173.030409ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 kubectl -- --context functional-216033 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-216033 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (54.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-216033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:51:24.056293  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:51:51.763231  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-216033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.192603301s)
functional_test.go:776: restart took 54.192725956s for "functional-216033" cluster.
I1217 00:52:13.068314  383008 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (54.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-216033 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1214050135/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-216033 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-216033
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-216033: exit status 115 (242.577405ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.60:31847 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-216033 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 config get cpus: exit status 14 (73.42042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 config get cpus: exit status 14 (78.073221ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (31.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-216033 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-216033 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 392326: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (31.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-216033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-216033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: exit status 23 (116.867309ms)

-- stdout --
	* [functional-216033] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1217 00:52:23.191394  391707 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:52:23.191675  391707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:23.191687  391707 out.go:374] Setting ErrFile to fd 2...
	I1217 00:52:23.191693  391707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:23.191916  391707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 00:52:23.192410  391707 out.go:368] Setting JSON to false
	I1217 00:52:23.193282  391707 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5682,"bootTime":1765927061,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:52:23.193344  391707 start.go:143] virtualization: kvm guest
	I1217 00:52:23.195399  391707 out.go:179] * [functional-216033] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:52:23.196644  391707 out.go:179]   - MINIKUBE_LOCATION=22140
	I1217 00:52:23.196634  391707 notify.go:221] Checking for updates...
	I1217 00:52:23.198856  391707 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:52:23.200114  391707 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 00:52:23.201226  391707 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 00:52:23.202350  391707 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:52:23.203564  391707 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:52:23.205201  391707 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:23.205717  391707 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:52:23.238956  391707 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:52:23.240407  391707 start.go:309] selected driver: kvm2
	I1217 00:52:23.240443  391707 start.go:927] validating driver "kvm2" against &{Name:functional-216033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-216033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 00:52:23.240571  391707 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:52:23.242864  391707 out.go:203] 
	W1217 00:52:23.243798  391707 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:52:23.244999  391707 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-216033 --dry-run --alsologtostderr -v=1 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-216033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-216033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: exit status 23 (124.314637ms)

-- stdout --
	* [functional-216033] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1217 00:52:23.435221  391739 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:52:23.435337  391739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:23.435345  391739 out.go:374] Setting ErrFile to fd 2...
	I1217 00:52:23.435352  391739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:23.435684  391739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 00:52:23.436156  391739 out.go:368] Setting JSON to false
	I1217 00:52:23.437027  391739 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5682,"bootTime":1765927061,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:52:23.437089  391739 start.go:143] virtualization: kvm guest
	I1217 00:52:23.439156  391739 out.go:179] * [functional-216033] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 00:52:23.440411  391739 out.go:179]   - MINIKUBE_LOCATION=22140
	I1217 00:52:23.440414  391739 notify.go:221] Checking for updates...
	I1217 00:52:23.442028  391739 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:52:23.443403  391739 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	I1217 00:52:23.444552  391739 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	I1217 00:52:23.445754  391739 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:52:23.446955  391739 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:52:23.448482  391739 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:23.449029  391739 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:52:23.486665  391739 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 00:52:23.487939  391739 start.go:309] selected driver: kvm2
	I1217 00:52:23.487960  391739 start.go:927] validating driver "kvm2" against &{Name:functional-216033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-216033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1217 00:52:23.488082  391739 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:52:23.490278  391739 out.go:203] 
	W1217 00:52:23.491647  391739 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:52:23.492790  391739 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (23.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-216033 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-216033 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-gqbpq" [fca05796-05a8-4e33-b86f-23b4e59b049c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-gqbpq" [fca05796-05a8-4e33-b86f-23b4e59b049c] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.00440823s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.60:31726
functional_test.go:1680: http://192.168.39.60:31726: success! body:
Request served by hello-node-connect-9f67c86d4-gqbpq

HTTP/1.1 GET /

Host: 192.168.39.60:31726
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (23.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (65.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [2a0aeadc-9465-4808-b3ab-5825f5a0a041] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004854902s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-216033 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-216033 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-216033 get pvc myclaim -o=json
I1217 00:52:32.361549  383008 retry.go:31] will retry after 1.639379854s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:647650cb-ac16-4cea-8e76-d209c728913b ResourceVersion:836 Generation:0 CreationTimestamp:2025-12-17 00:52:32 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001de40a0 VolumeMode:0xc001de40b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-216033 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-216033 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:52:34.205852  383008 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c1c30bb6-97e3-487a-a1c7-10921e227af1] Pending
helpers_test.go:353: "sp-pod" [c1c30bb6-97e3-487a-a1c7-10921e227af1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [c1c30bb6-97e3-487a-a1c7-10921e227af1] Running
E1217 00:53:21.060423  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 51.003292694s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-216033 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-216033 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-216033 delete -f testdata/storage-provisioner/pod.yaml: (1.094878829s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-216033 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:53:26.563431  383008 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [cddf8a93-cc53-4436-aa8c-5a30e0340d78] Pending
helpers_test.go:353: "sp-pod" [cddf8a93-cc53-4436-aa8c-5a30e0340d78] Running
E1217 00:53:31.302725  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004367649s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-216033 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (65.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh -n functional-216033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cp functional-216033:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1783859833/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh -n functional-216033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh -n functional-216033 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (40.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-216033 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-gp9xd" [4b466be8-07b5-4d45-b6be-938bd6043003] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-gp9xd" [4b466be8-07b5-4d45-b6be-938bd6043003] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 30.009797878s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;": exit status 1 (322.846775ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1217 00:52:54.022447  383008 retry.go:31] will retry after 1.079093308s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;": exit status 1 (343.169919ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1217 00:52:55.445504  383008 retry.go:31] will retry after 2.020550736s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;": exit status 1 (267.481076ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1217 00:52:57.734850  383008 retry.go:31] will retry after 1.961641766s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;": exit status 1 (295.417416ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1217 00:52:59.992975  383008 retry.go:31] will retry after 4.004979322s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-216033 exec mysql-7d7b65bc95-gp9xd -- mysql -ppassword -e "show databases;"
E1217 00:53:10.806575  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:10.812941  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:10.824320  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:10.845735  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:10.887233  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:10.968728  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:11.130408  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:11.451738  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:12.093931  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:13.376108  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:15.938108  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (40.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/383008/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo cat /etc/test/nested/copy/383008/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/383008.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo cat /etc/ssl/certs/383008.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/383008.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo cat /usr/share/ca-certificates/383008.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3830082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo cat /etc/ssl/certs/3830082.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3830082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo cat /usr/share/ca-certificates/3830082.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-216033 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 ssh "sudo systemctl is-active crio": exit status 1 (180.120033ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-216033 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-216033 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-zjgg6" [ea9f4387-cfbc-4bc0-93d3-327201ef078b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-zjgg6" [ea9f4387-cfbc-4bc0-93d3-327201ef078b] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005885642s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "313.875497ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.997176ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2282669987/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765932740127372731" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2282669987/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765932740127372731" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2282669987/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765932740127372731" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2282669987/001/test-1765932740127372731
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.604702ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 00:52:20.321342  383008 retry.go:31] will retry after 403.807734ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:52 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:52 test-1765932740127372731
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh cat /mount-9p/test-1765932740127372731
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-216033 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [405aceb4-d47c-47a0-b9a9-13c990200d57] Pending
helpers_test.go:353: "busybox-mount" [405aceb4-d47c-47a0-b9a9-13c990200d57] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [405aceb4-d47c-47a0-b9a9-13c990200d57] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [405aceb4-d47c-47a0-b9a9-13c990200d57] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005791997s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-216033 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2282669987/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "256.802508ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.801109ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-216033 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-216033
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-216033
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-216033 image ls --format short --alsologtostderr:
I1217 00:52:55.602345  392512 out.go:360] Setting OutFile to fd 1 ...
I1217 00:52:55.602633  392512 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:52:55.602644  392512 out.go:374] Setting ErrFile to fd 2...
I1217 00:52:55.602649  392512 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:52:55.602929  392512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:52:55.603627  392512 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:52:55.603746  392512 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:52:55.606157  392512 ssh_runner.go:195] Run: systemctl --version
I1217 00:52:55.608488  392512 main.go:143] libmachine: domain functional-216033 has defined MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:52:55.608983  392512 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:19:1c", ip: ""} in network mk-functional-216033: {Iface:virbr1 ExpiryTime:2025-12-17 01:49:22 +0000 UTC Type:0 Mac:52:54:00:89:19:1c Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-216033 Clientid:01:52:54:00:89:19:1c}
I1217 00:52:55.609023  392512 main.go:143] libmachine: domain functional-216033 has defined IP address 192.168.39.60 and MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:52:55.609185  392512 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-216033/id_rsa Username:docker}
I1217 00:52:55.729762  392512 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-216033 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ public.ecr.aws/docker/library/mysql         │ 8.4               │ 20d0be4ee4524 │ 785MB  │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ localhost/my-image                          │ functional-216033 │ 3c2f443c148bf │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-216033 │ 8629fecf043c4 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/kicbase/echo-server               │ functional-216033 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-216033 image ls --format table --alsologtostderr:
I1217 00:53:00.534007  392610 out.go:360] Setting OutFile to fd 1 ...
I1217 00:53:00.534414  392610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:53:00.534429  392610 out.go:374] Setting ErrFile to fd 2...
I1217 00:53:00.534436  392610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:53:00.534951  392610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:53:00.536026  392610 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:53:00.536132  392610 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:53:00.538141  392610 ssh_runner.go:195] Run: systemctl --version
I1217 00:53:00.540137  392610 main.go:143] libmachine: domain functional-216033 has defined MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:53:00.540576  392610 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:19:1c", ip: ""} in network mk-functional-216033: {Iface:virbr1 ExpiryTime:2025-12-17 01:49:22 +0000 UTC Type:0 Mac:52:54:00:89:19:1c Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-216033 Clientid:01:52:54:00:89:19:1c}
I1217 00:53:00.540614  392610 main.go:143] libmachine: domain functional-216033 has defined IP address 192.168.39.60 and MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:53:00.540782  392610 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-216033/id_rsa Username:docker}
I1217 00:53:00.625486  392610 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2025/12/17 00:53:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-216033 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9
b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8629fecf043c412150623ee14623365d106d31dc71436910648c0cd1662c3c47","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-216033"],"size":"30"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDiges
ts":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"3c2f443c148bfe4e7db1ffdf70ab4903804b9ffbfc582e5daa1f9198dda35b96","repoDigests":[],"repoTags":["localhost/my-image:functional-216033"],"size":"1240000"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":[],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-216033","docker.io/kicbase/echo-server:latest"],"size":"4940000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-216033 image ls --format json --alsologtostderr:
I1217 00:53:00.357500  392599 out.go:360] Setting OutFile to fd 1 ...
I1217 00:53:00.357816  392599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:53:00.357827  392599 out.go:374] Setting ErrFile to fd 2...
I1217 00:53:00.357832  392599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:53:00.358129  392599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:53:00.358707  392599 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:53:00.358815  392599 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:53:00.361090  392599 ssh_runner.go:195] Run: systemctl --version
I1217 00:53:00.363449  392599 main.go:143] libmachine: domain functional-216033 has defined MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:53:00.363844  392599 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:19:1c", ip: ""} in network mk-functional-216033: {Iface:virbr1 ExpiryTime:2025-12-17 01:49:22 +0000 UTC Type:0 Mac:52:54:00:89:19:1c Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-216033 Clientid:01:52:54:00:89:19:1c}
I1217 00:53:00.363871  392599 main.go:143] libmachine: domain functional-216033 has defined IP address 192.168.39.60 and MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:53:00.364119  392599 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-216033/id_rsa Username:docker}
I1217 00:53:00.447039  392599 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-216033 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-216033
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: 8629fecf043c412150623ee14623365d106d31dc71436910648c0cd1662c3c47
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-216033
size: "30"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-216033 image ls --format yaml --alsologtostderr:
I1217 00:52:55.866443  392522 out.go:360] Setting OutFile to fd 1 ...
I1217 00:52:55.866570  392522 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:52:55.866579  392522 out.go:374] Setting ErrFile to fd 2...
I1217 00:52:55.866584  392522 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:52:55.866844  392522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:52:55.867422  392522 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:52:55.867526  392522 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:52:55.869945  392522 ssh_runner.go:195] Run: systemctl --version
I1217 00:52:55.872466  392522 main.go:143] libmachine: domain functional-216033 has defined MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:52:55.872898  392522 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:19:1c", ip: ""} in network mk-functional-216033: {Iface:virbr1 ExpiryTime:2025-12-17 01:49:22 +0000 UTC Type:0 Mac:52:54:00:89:19:1c Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-216033 Clientid:01:52:54:00:89:19:1c}
I1217 00:52:55.872961  392522 main.go:143] libmachine: domain functional-216033 has defined IP address 192.168.39.60 and MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:52:55.873138  392522 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-216033/id_rsa Username:docker}
I1217 00:52:55.995931  392522 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 ssh pgrep buildkitd: exit status 1 (232.047953ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image build -t localhost/my-image:functional-216033 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-216033 image build -t localhost/my-image:functional-216033 testdata/build --alsologtostderr: (3.791477474s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-216033 image build -t localhost/my-image:functional-216033 testdata/build --alsologtostderr:
I1217 00:52:56.379592  392543 out.go:360] Setting OutFile to fd 1 ...
I1217 00:52:56.379847  392543 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:52:56.379855  392543 out.go:374] Setting ErrFile to fd 2...
I1217 00:52:56.379859  392543 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:52:56.380083  392543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
I1217 00:52:56.380659  392543 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:52:56.381464  392543 config.go:182] Loaded profile config "functional-216033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:52:56.383770  392543 ssh_runner.go:195] Run: systemctl --version
I1217 00:52:56.386376  392543 main.go:143] libmachine: domain functional-216033 has defined MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:52:56.387042  392543 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:19:1c", ip: ""} in network mk-functional-216033: {Iface:virbr1 ExpiryTime:2025-12-17 01:49:22 +0000 UTC Type:0 Mac:52:54:00:89:19:1c Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-216033 Clientid:01:52:54:00:89:19:1c}
I1217 00:52:56.387083  392543 main.go:143] libmachine: domain functional-216033 has defined IP address 192.168.39.60 and MAC address 52:54:00:89:19:1c in network mk-functional-216033
I1217 00:52:56.387259  392543 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/functional-216033/id_rsa Username:docker}
I1217 00:52:56.518418  392543 build_images.go:162] Building image from path: /tmp/build.3417074772.tar
I1217 00:52:56.518499  392543 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:52:56.561529  392543 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3417074772.tar
I1217 00:52:56.577260  392543 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3417074772.tar: stat -c "%s %y" /var/lib/minikube/build/build.3417074772.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3417074772.tar': No such file or directory
I1217 00:52:56.577309  392543 ssh_runner.go:362] scp /tmp/build.3417074772.tar --> /var/lib/minikube/build/build.3417074772.tar (3072 bytes)
I1217 00:52:56.687532  392543 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3417074772
I1217 00:52:56.726069  392543 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3417074772 -xf /var/lib/minikube/build/build.3417074772.tar
I1217 00:52:56.758500  392543 docker.go:361] Building image: /var/lib/minikube/build/build.3417074772
I1217 00:52:56.758596  392543 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-216033 /var/lib/minikube/build/build.3417074772
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.3s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 writing image sha256:3c2f443c148bfe4e7db1ffdf70ab4903804b9ffbfc582e5daa1f9198dda35b96 done
#8 naming to localhost/my-image:functional-216033 done
#8 DONE 0.1s
I1217 00:53:00.069056  392543 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-216033 /var/lib/minikube/build/build.3417074772: (3.310427038s)
I1217 00:53:00.069134  392543 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3417074772
I1217 00:53:00.083687  392543 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3417074772.tar
I1217 00:53:00.100698  392543 build_images.go:218] Built localhost/my-image:functional-216033 from /tmp/build.3417074772.tar
I1217 00:53:00.100747  392543 build_images.go:134] succeeded building to: functional-216033
I1217 00:53:00.100754  392543 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-216033
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image load --daemon kicbase/echo-server:functional-216033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-216033 docker-env) && out/minikube-linux-amd64 status -p functional-216033"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-216033 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (0.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image load --daemon kicbase/echo-server:functional-216033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-216033
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image load --daemon kicbase/echo-server:functional-216033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image save kicbase/echo-server:functional-216033 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image rm kicbase/echo-server:functional-216033 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-216033
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 image save --daemon kicbase/echo-server:functional-216033 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-216033
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3772595814/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.138375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 00:52:28.382604  383008 retry.go:31] will retry after 280.477146ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3772595814/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 ssh "sudo umount -f /mount-9p": exit status 1 (218.428127ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-216033 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3772595814/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 service list -o json
functional_test.go:1504: Took "324.458707ms" to run "out/minikube-linux-amd64 -p functional-216033 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.60:30622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1101652931/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1101652931/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1101652931/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T" /mount1: exit status 1 (293.969181ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 00:52:29.795608  383008 retry.go:31] will retry after 525.058956ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-216033 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1101652931/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1101652931/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-216033 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1101652931/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-216033 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.60:30622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-216033
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-216033
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-216033
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestGvisorAddon (224.82s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-506412 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1217 01:26:24.054051  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-506412 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m30.356085154s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-506412 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-506412 cache add gcr.io/k8s-minikube/gvisor-addon:2: (6.217579134s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-506412 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-506412 addons enable gvisor: (5.598537022s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [9dc7a911-3d4a-4ac8-a7e5-5bfd280ebcda] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004315855s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-506412 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [daae4a15-8ce5-4cb9-845b-49dc6697a7be] Pending
helpers_test.go:353: "nginx-gvisor" [daae4a15-8ce5-4cb9-845b-49dc6697a7be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-gvisor" [daae4a15-8ce5-4cb9-845b-49dc6697a7be] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 48.005839477s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-506412
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-506412: (7.982525305s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-506412 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-506412 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (46.504132683s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [9dc7a911-3d4a-4ac8-a7e5-5bfd280ebcda] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:353: "gvisor" [9dc7a911-3d4a-4ac8-a7e5-5bfd280ebcda] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004761274s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [daae4a15-8ce5-4cb9-845b-49dc6697a7be] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 6.003270287s
helpers_test.go:176: Cleaning up "gvisor-506412" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-506412
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-506412: (1.964257708s)
--- PASS: TestGvisorAddon (224.82s)

TestMultiControlPlane/serial/StartCluster (206.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1217 00:53:51.784370  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:54:32.746521  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:55:54.668750  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:56:24.054040  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (3m26.226613554s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (206.79s)

TestMultiControlPlane/serial/DeployApp (6.65s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 kubectl -- rollout status deployment/busybox: (4.236303226s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-6tvqd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-btjql -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-rfhc7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-6tvqd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-btjql -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-rfhc7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-6tvqd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-btjql -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-rfhc7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.65s)
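The DeployApp step above fans out three DNS lookups (short name, `kubernetes.default`, and the fully qualified service name) across each of the three busybox pods. The loop below is a minimal sketch of that 3x3 pattern; `kubectl_exec` is a stub that only echoes what it would run, standing in for `out/minikube-linux-amd64 -p ha-346541 kubectl -- exec <pod> -- nslookup <name>` since no cluster is assumed here.

```shell
# Stub standing in for the minikube kubectl wrapper; a real run would
# exec nslookup inside the pod instead of printing.
kubectl_exec() { printf '%s -> nslookup %s\n' "$1" "$2"; }

# Pod names taken from the log above; three names of increasing
# specificity per pod, nine checks total.
checks=$(for pod in busybox-7b57f96db7-6tvqd busybox-7b57f96db7-btjql busybox-7b57f96db7-rfhc7; do
  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    kubectl_exec "$pod" "$name"
  done
done)
printf '%s\n' "$checks"
```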

TestMultiControlPlane/serial/PingHostFromPods (1.37s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-6tvqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-6tvqd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-btjql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-btjql -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-rfhc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 kubectl -- exec busybox-7b57f96db7-rfhc7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)
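The pipeline at ha_test.go:207 recovers the host IP from busybox-style `nslookup` output by taking field 3 of line 5 of the output. A sketch of just that text-processing step, run against a hypothetical sample of busybox nslookup output (the pipeline itself is verbatim from the log):

```shell
# Hypothetical busybox nslookup output for host.minikube.internal:
# line 5 is the "Address 1: <ip> <hostname>" answer line.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# awk 'NR==5' isolates the answer line; cut -d' ' -f3 picks the IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # -> 192.168.39.1
# the test then verifies reachability with: ping -c 1 "$host_ip"
```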

TestMultiControlPlane/serial/AddWorkerNode (46.65s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 node add --alsologtostderr -v 5
E1217 00:57:19.415651  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:19.422305  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:19.433753  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:19.455269  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:19.496830  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:19.578468  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:19.740025  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:20.061826  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:20.703984  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:21.986058  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:24.548343  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:29.670153  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:57:39.912287  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 node add --alsologtostderr -v 5: (46.012549086s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.65s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-346541 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

TestMultiControlPlane/serial/CopyFile (10.64s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp testdata/cp-test.txt ha-346541:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2977735014/001/cp-test_ha-346541.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541:/home/docker/cp-test.txt ha-346541-m02:/home/docker/cp-test_ha-346541_ha-346541-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test_ha-346541_ha-346541-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541:/home/docker/cp-test.txt ha-346541-m03:/home/docker/cp-test_ha-346541_ha-346541-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test_ha-346541_ha-346541-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541:/home/docker/cp-test.txt ha-346541-m04:/home/docker/cp-test_ha-346541_ha-346541-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test_ha-346541_ha-346541-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp testdata/cp-test.txt ha-346541-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2977735014/001/cp-test_ha-346541-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m02:/home/docker/cp-test.txt ha-346541:/home/docker/cp-test_ha-346541-m02_ha-346541.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test_ha-346541-m02_ha-346541.txt"
E1217 00:58:00.394333  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m02:/home/docker/cp-test.txt ha-346541-m03:/home/docker/cp-test_ha-346541-m02_ha-346541-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test_ha-346541-m02_ha-346541-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m02:/home/docker/cp-test.txt ha-346541-m04:/home/docker/cp-test_ha-346541-m02_ha-346541-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test_ha-346541-m02_ha-346541-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp testdata/cp-test.txt ha-346541-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2977735014/001/cp-test_ha-346541-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m03:/home/docker/cp-test.txt ha-346541:/home/docker/cp-test_ha-346541-m03_ha-346541.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test_ha-346541-m03_ha-346541.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m03:/home/docker/cp-test.txt ha-346541-m02:/home/docker/cp-test_ha-346541-m03_ha-346541-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test_ha-346541-m03_ha-346541-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m03:/home/docker/cp-test.txt ha-346541-m04:/home/docker/cp-test_ha-346541-m03_ha-346541-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test_ha-346541-m03_ha-346541-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp testdata/cp-test.txt ha-346541-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2977735014/001/cp-test_ha-346541-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m04:/home/docker/cp-test.txt ha-346541:/home/docker/cp-test_ha-346541-m04_ha-346541.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541 "sudo cat /home/docker/cp-test_ha-346541-m04_ha-346541.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m04:/home/docker/cp-test.txt ha-346541-m02:/home/docker/cp-test_ha-346541-m04_ha-346541-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m02 "sudo cat /home/docker/cp-test_ha-346541-m04_ha-346541-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 cp ha-346541-m04:/home/docker/cp-test.txt ha-346541-m03:/home/docker/cp-test_ha-346541-m04_ha-346541-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 ssh -n ha-346541-m03 "sudo cat /home/docker/cp-test_ha-346541-m04_ha-346541-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.64s)
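Each CopyFile step above follows the same shape: copy a file with `minikube cp`, then read it back over `minikube ssh -n <node> "sudo cat …"` and compare contents. A local stand-in for that round-trip, using plain `cp` in place of `minikube cp` and a direct `cmp` in place of the ssh/cat verification (paths are temporary and illustrative only):

```shell
# Create a source file, copy it, and verify the copy byte-for-byte --
# the same copy-then-verify shape the test applies between nodes.
src=$(mktemp)
dstdir=$(mktemp -d)
echo 'cp-test payload' > "$src"

cp "$src" "$dstdir/cp-test.txt"            # stands in for: minikube -p ha-346541 cp ...
cmp -s "$src" "$dstdir/cp-test.txt" && echo "contents match"
```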

TestMultiControlPlane/serial/StopSecondaryNode (13.18s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 node stop m02 --alsologtostderr -v 5
E1217 00:58:10.806703  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 node stop m02 --alsologtostderr -v 5: (12.702486991s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5: exit status 7 (478.19704ms)

-- stdout --
	ha-346541
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-346541-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-346541-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-346541-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1217 00:58:19.394066  395517 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:58:19.394313  395517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:58:19.394321  395517 out.go:374] Setting ErrFile to fd 2...
	I1217 00:58:19.394326  395517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:58:19.394522  395517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 00:58:19.394681  395517 out.go:368] Setting JSON to false
	I1217 00:58:19.394708  395517 mustload.go:66] Loading cluster: ha-346541
	I1217 00:58:19.394826  395517 notify.go:221] Checking for updates...
	I1217 00:58:19.395117  395517 config.go:182] Loaded profile config "ha-346541": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 00:58:19.395135  395517 status.go:174] checking status of ha-346541 ...
	I1217 00:58:19.397381  395517 status.go:371] ha-346541 host status = "Running" (err=<nil>)
	I1217 00:58:19.397398  395517 host.go:66] Checking if "ha-346541" exists ...
	I1217 00:58:19.400239  395517 main.go:143] libmachine: domain ha-346541 has defined MAC address 52:54:00:5a:96:95 in network mk-ha-346541
	I1217 00:58:19.400808  395517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5a:96:95", ip: ""} in network mk-ha-346541: {Iface:virbr1 ExpiryTime:2025-12-17 01:53:48 +0000 UTC Type:0 Mac:52:54:00:5a:96:95 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-346541 Clientid:01:52:54:00:5a:96:95}
	I1217 00:58:19.400833  395517 main.go:143] libmachine: domain ha-346541 has defined IP address 192.168.39.165 and MAC address 52:54:00:5a:96:95 in network mk-ha-346541
	I1217 00:58:19.401007  395517 host.go:66] Checking if "ha-346541" exists ...
	I1217 00:58:19.401201  395517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:58:19.403514  395517 main.go:143] libmachine: domain ha-346541 has defined MAC address 52:54:00:5a:96:95 in network mk-ha-346541
	I1217 00:58:19.403974  395517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5a:96:95", ip: ""} in network mk-ha-346541: {Iface:virbr1 ExpiryTime:2025-12-17 01:53:48 +0000 UTC Type:0 Mac:52:54:00:5a:96:95 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-346541 Clientid:01:52:54:00:5a:96:95}
	I1217 00:58:19.404001  395517 main.go:143] libmachine: domain ha-346541 has defined IP address 192.168.39.165 and MAC address 52:54:00:5a:96:95 in network mk-ha-346541
	I1217 00:58:19.404242  395517 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/ha-346541/id_rsa Username:docker}
	I1217 00:58:19.486426  395517 ssh_runner.go:195] Run: systemctl --version
	I1217 00:58:19.492963  395517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:58:19.509136  395517 kubeconfig.go:125] found "ha-346541" server: "https://192.168.39.254:8443"
	I1217 00:58:19.509185  395517 api_server.go:166] Checking apiserver status ...
	I1217 00:58:19.509219  395517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:19.529847  395517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2490/cgroup
	W1217 00:58:19.541469  395517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:58:19.541533  395517 ssh_runner.go:195] Run: ls
	I1217 00:58:19.546433  395517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 00:58:19.551024  395517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 00:58:19.551045  395517 status.go:463] ha-346541 apiserver status = Running (err=<nil>)
	I1217 00:58:19.551055  395517 status.go:176] ha-346541 status: &{Name:ha-346541 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:58:19.551070  395517 status.go:174] checking status of ha-346541-m02 ...
	I1217 00:58:19.552588  395517 status.go:371] ha-346541-m02 host status = "Stopped" (err=<nil>)
	I1217 00:58:19.552607  395517 status.go:384] host is not running, skipping remaining checks
	I1217 00:58:19.552614  395517 status.go:176] ha-346541-m02 status: &{Name:ha-346541-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:58:19.552632  395517 status.go:174] checking status of ha-346541-m03 ...
	I1217 00:58:19.554198  395517 status.go:371] ha-346541-m03 host status = "Running" (err=<nil>)
	I1217 00:58:19.554216  395517 host.go:66] Checking if "ha-346541-m03" exists ...
	I1217 00:58:19.556813  395517 main.go:143] libmachine: domain ha-346541-m03 has defined MAC address 52:54:00:29:46:31 in network mk-ha-346541
	I1217 00:58:19.557198  395517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:46:31", ip: ""} in network mk-ha-346541: {Iface:virbr1 ExpiryTime:2025-12-17 01:55:48 +0000 UTC Type:0 Mac:52:54:00:29:46:31 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-346541-m03 Clientid:01:52:54:00:29:46:31}
	I1217 00:58:19.557226  395517 main.go:143] libmachine: domain ha-346541-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:29:46:31 in network mk-ha-346541
	I1217 00:58:19.557353  395517 host.go:66] Checking if "ha-346541-m03" exists ...
	I1217 00:58:19.557534  395517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:58:19.559510  395517 main.go:143] libmachine: domain ha-346541-m03 has defined MAC address 52:54:00:29:46:31 in network mk-ha-346541
	I1217 00:58:19.560008  395517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:46:31", ip: ""} in network mk-ha-346541: {Iface:virbr1 ExpiryTime:2025-12-17 01:55:48 +0000 UTC Type:0 Mac:52:54:00:29:46:31 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-346541-m03 Clientid:01:52:54:00:29:46:31}
	I1217 00:58:19.560036  395517 main.go:143] libmachine: domain ha-346541-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:29:46:31 in network mk-ha-346541
	I1217 00:58:19.560220  395517 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/ha-346541-m03/id_rsa Username:docker}
	I1217 00:58:19.646141  395517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:58:19.668070  395517 kubeconfig.go:125] found "ha-346541" server: "https://192.168.39.254:8443"
	I1217 00:58:19.668099  395517 api_server.go:166] Checking apiserver status ...
	I1217 00:58:19.668142  395517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:19.687499  395517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2431/cgroup
	W1217 00:58:19.698818  395517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2431/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:58:19.698878  395517 ssh_runner.go:195] Run: ls
	I1217 00:58:19.703861  395517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 00:58:19.708545  395517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 00:58:19.708566  395517 status.go:463] ha-346541-m03 apiserver status = Running (err=<nil>)
	I1217 00:58:19.708575  395517 status.go:176] ha-346541-m03 status: &{Name:ha-346541-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:58:19.708591  395517 status.go:174] checking status of ha-346541-m04 ...
	I1217 00:58:19.710314  395517 status.go:371] ha-346541-m04 host status = "Running" (err=<nil>)
	I1217 00:58:19.710337  395517 host.go:66] Checking if "ha-346541-m04" exists ...
	I1217 00:58:19.713110  395517 main.go:143] libmachine: domain ha-346541-m04 has defined MAC address 52:54:00:73:5f:b3 in network mk-ha-346541
	I1217 00:58:19.713533  395517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:5f:b3", ip: ""} in network mk-ha-346541: {Iface:virbr1 ExpiryTime:2025-12-17 01:57:24 +0000 UTC Type:0 Mac:52:54:00:73:5f:b3 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-346541-m04 Clientid:01:52:54:00:73:5f:b3}
	I1217 00:58:19.713555  395517 main.go:143] libmachine: domain ha-346541-m04 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:5f:b3 in network mk-ha-346541
	I1217 00:58:19.713723  395517 host.go:66] Checking if "ha-346541-m04" exists ...
	I1217 00:58:19.714007  395517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:58:19.716229  395517 main.go:143] libmachine: domain ha-346541-m04 has defined MAC address 52:54:00:73:5f:b3 in network mk-ha-346541
	I1217 00:58:19.716621  395517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:5f:b3", ip: ""} in network mk-ha-346541: {Iface:virbr1 ExpiryTime:2025-12-17 01:57:24 +0000 UTC Type:0 Mac:52:54:00:73:5f:b3 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-346541-m04 Clientid:01:52:54:00:73:5f:b3}
	I1217 00:58:19.716645  395517 main.go:143] libmachine: domain ha-346541-m04 has defined IP address 192.168.39.71 and MAC address 52:54:00:73:5f:b3 in network mk-ha-346541
	I1217 00:58:19.716787  395517 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/ha-346541-m04/id_rsa Username:docker}
	I1217 00:58:19.794437  395517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:58:19.810211  395517 status.go:176] ha-346541-m04 status: &{Name:ha-346541-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.18s)
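In the run above, `minikube status` exited with status 7 once m02 was stopped, and the test treats that non-zero exit as the expected signal of a degraded cluster rather than a failure. A script wrapping status checks should branch on the exit code the same way; `status_check` below is a stub returning 7 in place of `out/minikube-linux-amd64 -p ha-346541 status`:

```shell
# Stub for the status command; in this scenario it would exit 7
# because one control-plane node (m02) is stopped.
status_check() { return 7; }

if status_check; then
  state="all nodes running"
else
  rc=$?    # exit status of status_check, preserved by the if
  if [ "$rc" -eq 7 ]; then
    state="degraded (exit $rc)"   # some node stopped -- expected here
  else
    state="error (exit $rc)"      # any other non-zero code
  fi
fi
echo "$state"   # -> degraded (exit 7)
```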

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (29.44s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 node start m02 --alsologtostderr -v 5
E1217 00:58:38.510518  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:58:41.356670  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 node start m02 --alsologtostderr -v 5: (28.623098606s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (163.33s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 stop --alsologtostderr -v 5: (40.44210824s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 start --wait true --alsologtostderr -v 5
E1217 01:00:03.279491  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:01:24.053806  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 start --wait true --alsologtostderr -v 5: (2m2.739896991s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (163.33s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.01s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 node delete m03 --alsologtostderr -v 5: (6.360862168s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.01s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

TestMultiControlPlane/serial/StopCluster (37.68s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 stop --alsologtostderr -v 5: (37.610212479s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5: exit status 7 (65.507386ms)
-- stdout --
	ha-346541
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-346541-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-346541-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 01:02:19.058810  397090 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:02:19.058978  397090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:02:19.058989  397090 out.go:374] Setting ErrFile to fd 2...
	I1217 01:02:19.058993  397090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:02:19.059210  397090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 01:02:19.059372  397090 out.go:368] Setting JSON to false
	I1217 01:02:19.059398  397090 mustload.go:66] Loading cluster: ha-346541
	I1217 01:02:19.059523  397090 notify.go:221] Checking for updates...
	I1217 01:02:19.059730  397090 config.go:182] Loaded profile config "ha-346541": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:02:19.059748  397090 status.go:174] checking status of ha-346541 ...
	I1217 01:02:19.061644  397090 status.go:371] ha-346541 host status = "Stopped" (err=<nil>)
	I1217 01:02:19.061660  397090 status.go:384] host is not running, skipping remaining checks
	I1217 01:02:19.061665  397090 status.go:176] ha-346541 status: &{Name:ha-346541 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:02:19.061683  397090 status.go:174] checking status of ha-346541-m02 ...
	I1217 01:02:19.062741  397090 status.go:371] ha-346541-m02 host status = "Stopped" (err=<nil>)
	I1217 01:02:19.062754  397090 status.go:384] host is not running, skipping remaining checks
	I1217 01:02:19.062759  397090 status.go:176] ha-346541-m02 status: &{Name:ha-346541-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:02:19.062769  397090 status.go:174] checking status of ha-346541-m04 ...
	I1217 01:02:19.063947  397090 status.go:371] ha-346541-m04 host status = "Stopped" (err=<nil>)
	I1217 01:02:19.063959  397090 status.go:384] host is not running, skipping remaining checks
	I1217 01:02:19.063963  397090 status.go:176] ha-346541-m04 status: &{Name:ha-346541-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.68s)

TestMultiControlPlane/serial/RestartCluster (111.74s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 start --wait true --alsologtostderr -v 5 --driver=kvm2 
E1217 01:02:19.415534  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:02:47.123477  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:02:47.125632  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:03:10.807130  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (1m51.10055112s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (111.74s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

TestMultiControlPlane/serial/AddSecondaryNode (111.33s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-346541 node add --control-plane --alsologtostderr -v 5: (1m50.66836005s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-346541 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (111.33s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

TestImageBuild/serial/Setup (40.01s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-822264 --driver=kvm2 
E1217 01:06:24.056289  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-822264 --driver=kvm2 : (40.005190142s)
--- PASS: TestImageBuild/serial/Setup (40.01s)

TestImageBuild/serial/NormalBuild (1.5s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-822264
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-822264: (1.501546099s)
--- PASS: TestImageBuild/serial/NormalBuild (1.50s)

TestImageBuild/serial/BuildWithBuildArg (0.92s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-822264
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)

TestImageBuild/serial/BuildWithDockerIgnore (0.64s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-822264
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.64s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-822264
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

TestJSONOutput/start/Command (80.01s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-537197 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
E1217 01:07:19.415577  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:08:10.806887  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-537197 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m20.010319792s)
--- PASS: TestJSONOutput/start/Command (80.01s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-537197 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-537197 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (14.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-537197 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-537197 --output=json --user=testUser: (14.014054665s)
--- PASS: TestJSONOutput/stop/Command (14.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-097674 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-097674 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.199787ms)
-- stdout --
	{"specversion":"1.0","id":"1fa34a6b-abf5-421a-9a40-8c0f05da8cc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-097674] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e36ec83-e314-482d-88cf-9267898a7180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22140"}}
	{"specversion":"1.0","id":"95edf349-9158-4c0e-ba33-0ed7f5889ea9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e492b340-5fe7-4883-b781-5755fd51ebbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig"}}
	{"specversion":"1.0","id":"26f298cd-2897-483c-be33-9c491abde858","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube"}}
	{"specversion":"1.0","id":"3beef967-a07b-4a22-9283-4d5b6f868d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5f9da3f3-e295-4ea5-b7bb-f5cc55ad3fba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"30920792-0a41-4a21-b354-05791f22e9dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-097674" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-097674
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (85.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-761894 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-761894 --driver=kvm2 : (42.629289228s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-764144 --driver=kvm2 
E1217 01:09:33.872006  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-764144 --driver=kvm2 : (40.595874752s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-761894
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-764144
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-764144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-764144
helpers_test.go:176: Cleaning up "first-761894" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-761894
--- PASS: TestMinikubeProfile (85.84s)

TestMountStart/serial/StartWithMountFirst (20.79s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-161611 --memory=3072 --mount-string /tmp/TestMountStartserial1837554364/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-161611 --memory=3072 --mount-string /tmp/TestMountStartserial1837554364/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (19.784837345s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.79s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-161611 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-161611 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (20.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-185337 --memory=3072 --mount-string /tmp/TestMountStartserial1837554364/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-185337 --memory=3072 --mount-string /tmp/TestMountStartserial1837554364/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (19.283575502s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.28s)

TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185337 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185337 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-161611 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185337 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185337 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-185337
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-185337: (1.265987386s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (20.39s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-185337
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-185337: (19.394589002s)
--- PASS: TestMountStart/serial/RestartStopped (20.39s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185337 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185337 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (110.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849738 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E1217 01:11:24.054283  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:12:19.416017  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849738 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (1m50.481307925s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.81s)

TestMultiNode/serial/DeployApp2Nodes (4.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-849738 -- rollout status deployment/busybox: (3.299038901s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-89t4q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-ck27r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-89t4q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-ck27r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-89t4q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-ck27r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-89t4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-89t4q -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-ck27r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849738 -- exec busybox-7b57f96db7-ck27r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

TestMultiNode/serial/AddNode (47.03s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-849738 -v=5 --alsologtostderr
E1217 01:13:10.807137  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-849738 -v=5 --alsologtostderr: (46.599321786s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.03s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-849738 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1217 01:13:42.485850  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

TestMultiNode/serial/CopyFile (6.00s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp testdata/cp-test.txt multinode-849738:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3358627668/001/cp-test_multinode-849738.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738:/home/docker/cp-test.txt multinode-849738-m02:/home/docker/cp-test_multinode-849738_multinode-849738-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m02 "sudo cat /home/docker/cp-test_multinode-849738_multinode-849738-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738:/home/docker/cp-test.txt multinode-849738-m03:/home/docker/cp-test_multinode-849738_multinode-849738-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m03 "sudo cat /home/docker/cp-test_multinode-849738_multinode-849738-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp testdata/cp-test.txt multinode-849738-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3358627668/001/cp-test_multinode-849738-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738-m02:/home/docker/cp-test.txt multinode-849738:/home/docker/cp-test_multinode-849738-m02_multinode-849738.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738 "sudo cat /home/docker/cp-test_multinode-849738-m02_multinode-849738.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738-m02:/home/docker/cp-test.txt multinode-849738-m03:/home/docker/cp-test_multinode-849738-m02_multinode-849738-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m03 "sudo cat /home/docker/cp-test_multinode-849738-m02_multinode-849738-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp testdata/cp-test.txt multinode-849738-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3358627668/001/cp-test_multinode-849738-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738-m03:/home/docker/cp-test.txt multinode-849738:/home/docker/cp-test_multinode-849738-m03_multinode-849738.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738 "sudo cat /home/docker/cp-test_multinode-849738-m03_multinode-849738.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 cp multinode-849738-m03:/home/docker/cp-test.txt multinode-849738-m02:/home/docker/cp-test_multinode-849738-m03_multinode-849738-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 ssh -n multinode-849738-m02 "sudo cat /home/docker/cp-test_multinode-849738-m03_multinode-849738-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.00s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-849738 node stop m03: (1.624016024s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849738 status: exit status 7 (315.31071ms)

-- stdout --
	multinode-849738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-849738-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-849738-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr: exit status 7 (328.174976ms)

-- stdout --
	multinode-849738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-849738-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-849738-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:13:50.716743  403305 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:13:50.716931  403305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:13:50.716946  403305 out.go:374] Setting ErrFile to fd 2...
	I1217 01:13:50.716955  403305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:13:50.717134  403305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 01:13:50.717331  403305 out.go:368] Setting JSON to false
	I1217 01:13:50.717362  403305 mustload.go:66] Loading cluster: multinode-849738
	I1217 01:13:50.717640  403305 notify.go:221] Checking for updates...
	I1217 01:13:50.718652  403305 config.go:182] Loaded profile config "multinode-849738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:13:50.718684  403305 status.go:174] checking status of multinode-849738 ...
	I1217 01:13:50.721065  403305 status.go:371] multinode-849738 host status = "Running" (err=<nil>)
	I1217 01:13:50.721084  403305 host.go:66] Checking if "multinode-849738" exists ...
	I1217 01:13:50.723811  403305 main.go:143] libmachine: domain multinode-849738 has defined MAC address 52:54:00:05:81:ab in network mk-multinode-849738
	I1217 01:13:50.724289  403305 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:81:ab", ip: ""} in network mk-multinode-849738: {Iface:virbr1 ExpiryTime:2025-12-17 02:11:13 +0000 UTC Type:0 Mac:52:54:00:05:81:ab Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:multinode-849738 Clientid:01:52:54:00:05:81:ab}
	I1217 01:13:50.724317  403305 main.go:143] libmachine: domain multinode-849738 has defined IP address 192.168.39.103 and MAC address 52:54:00:05:81:ab in network mk-multinode-849738
	I1217 01:13:50.724445  403305 host.go:66] Checking if "multinode-849738" exists ...
	I1217 01:13:50.724646  403305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:13:50.726998  403305 main.go:143] libmachine: domain multinode-849738 has defined MAC address 52:54:00:05:81:ab in network mk-multinode-849738
	I1217 01:13:50.727424  403305 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:81:ab", ip: ""} in network mk-multinode-849738: {Iface:virbr1 ExpiryTime:2025-12-17 02:11:13 +0000 UTC Type:0 Mac:52:54:00:05:81:ab Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:multinode-849738 Clientid:01:52:54:00:05:81:ab}
	I1217 01:13:50.727446  403305 main.go:143] libmachine: domain multinode-849738 has defined IP address 192.168.39.103 and MAC address 52:54:00:05:81:ab in network mk-multinode-849738
	I1217 01:13:50.727618  403305 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/multinode-849738/id_rsa Username:docker}
	I1217 01:13:50.803525  403305 ssh_runner.go:195] Run: systemctl --version
	I1217 01:13:50.809334  403305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:13:50.825831  403305 kubeconfig.go:125] found "multinode-849738" server: "https://192.168.39.103:8443"
	I1217 01:13:50.825868  403305 api_server.go:166] Checking apiserver status ...
	I1217 01:13:50.825923  403305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:13:50.848066  403305 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2428/cgroup
	W1217 01:13:50.858745  403305 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:13:50.858795  403305 ssh_runner.go:195] Run: ls
	I1217 01:13:50.863419  403305 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I1217 01:13:50.871297  403305 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I1217 01:13:50.871322  403305 status.go:463] multinode-849738 apiserver status = Running (err=<nil>)
	I1217 01:13:50.871335  403305 status.go:176] multinode-849738 status: &{Name:multinode-849738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:13:50.871360  403305 status.go:174] checking status of multinode-849738-m02 ...
	I1217 01:13:50.873191  403305 status.go:371] multinode-849738-m02 host status = "Running" (err=<nil>)
	I1217 01:13:50.873214  403305 host.go:66] Checking if "multinode-849738-m02" exists ...
	I1217 01:13:50.875848  403305 main.go:143] libmachine: domain multinode-849738-m02 has defined MAC address 52:54:00:00:c1:c4 in network mk-multinode-849738
	I1217 01:13:50.876297  403305 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:c1:c4", ip: ""} in network mk-multinode-849738: {Iface:virbr1 ExpiryTime:2025-12-17 02:12:15 +0000 UTC Type:0 Mac:52:54:00:00:c1:c4 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-849738-m02 Clientid:01:52:54:00:00:c1:c4}
	I1217 01:13:50.876341  403305 main.go:143] libmachine: domain multinode-849738-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:00:c1:c4 in network mk-multinode-849738
	I1217 01:13:50.876496  403305 host.go:66] Checking if "multinode-849738-m02" exists ...
	I1217 01:13:50.876735  403305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:13:50.879252  403305 main.go:143] libmachine: domain multinode-849738-m02 has defined MAC address 52:54:00:00:c1:c4 in network mk-multinode-849738
	I1217 01:13:50.879656  403305 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:c1:c4", ip: ""} in network mk-multinode-849738: {Iface:virbr1 ExpiryTime:2025-12-17 02:12:15 +0000 UTC Type:0 Mac:52:54:00:00:c1:c4 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-849738-m02 Clientid:01:52:54:00:00:c1:c4}
	I1217 01:13:50.879680  403305 main.go:143] libmachine: domain multinode-849738-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:00:c1:c4 in network mk-multinode-849738
	I1217 01:13:50.879834  403305 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22140-379084/.minikube/machines/multinode-849738-m02/id_rsa Username:docker}
	I1217 01:13:50.966276  403305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:13:50.982214  403305 status.go:176] multinode-849738-m02 status: &{Name:multinode-849738-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:13:50.982255  403305 status.go:174] checking status of multinode-849738-m03 ...
	I1217 01:13:50.983882  403305 status.go:371] multinode-849738-m03 host status = "Stopped" (err=<nil>)
	I1217 01:13:50.983898  403305 status.go:384] host is not running, skipping remaining checks
	I1217 01:13:50.983914  403305 status.go:176] multinode-849738-m03 status: &{Name:multinode-849738-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (38.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-849738 node start m03 -v=5 --alsologtostderr: (37.639500526s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.13s)

TestMultiNode/serial/RestartKeepsNodes (162.41s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-849738
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-849738
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-849738: (25.29324455s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849738 --wait=true -v=5 --alsologtostderr
E1217 01:16:24.054088  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849738 --wait=true -v=5 --alsologtostderr: (2m16.994388686s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-849738
--- PASS: TestMultiNode/serial/RestartKeepsNodes (162.41s)

TestMultiNode/serial/DeleteNode (2.02s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-849738 node delete m03: (1.581549285s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.02s)

TestMultiNode/serial/StopMultiNode (24.90s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 stop
E1217 01:17:19.420298  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-849738 stop: (24.765013802s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849738 status: exit status 7 (64.131962ms)

-- stdout --
	multinode-849738
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-849738-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr: exit status 7 (65.842202ms)

-- stdout --
	multinode-849738
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-849738-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:17:38.442091  404656 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:17:38.442206  404656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:38.442215  404656 out.go:374] Setting ErrFile to fd 2...
	I1217 01:17:38.442219  404656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:38.442407  404656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 01:17:38.442587  404656 out.go:368] Setting JSON to false
	I1217 01:17:38.442615  404656 mustload.go:66] Loading cluster: multinode-849738
	I1217 01:17:38.442667  404656 notify.go:221] Checking for updates...
	I1217 01:17:38.442999  404656 config.go:182] Loaded profile config "multinode-849738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:17:38.443018  404656 status.go:174] checking status of multinode-849738 ...
	I1217 01:17:38.445048  404656 status.go:371] multinode-849738 host status = "Stopped" (err=<nil>)
	I1217 01:17:38.445062  404656 status.go:384] host is not running, skipping remaining checks
	I1217 01:17:38.445067  404656 status.go:176] multinode-849738 status: &{Name:multinode-849738 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:17:38.445080  404656 status.go:174] checking status of multinode-849738-m02 ...
	I1217 01:17:38.446383  404656 status.go:371] multinode-849738-m02 host status = "Stopped" (err=<nil>)
	I1217 01:17:38.446400  404656 status.go:384] host is not running, skipping remaining checks
	I1217 01:17:38.446407  404656 status.go:176] multinode-849738-m02 status: &{Name:multinode-849738-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.90s)

TestMultiNode/serial/RestartMultiNode (97.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849738 --wait=true -v=5 --alsologtostderr --driver=kvm2 
E1217 01:18:10.806992  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849738 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (1m36.899449901s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849738 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (97.36s)

TestMultiNode/serial/ValidateNameConflict (42.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-849738
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849738-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-849738-m02 --driver=kvm2 : exit status 14 (76.009876ms)

-- stdout --
	* [multinode-849738-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-849738-m02' is duplicated with machine name 'multinode-849738-m02' in profile 'multinode-849738'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849738-m03 --driver=kvm2 
E1217 01:19:27.129061  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849738-m03 --driver=kvm2 : (41.613893777s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-849738
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-849738: exit status 80 (196.852511ms)

-- stdout --
	* Adding node m03 to cluster multinode-849738 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-849738-m03 already exists in multinode-849738-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-849738-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.77s)

TestPreload (138.9s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-999729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 
E1217 01:21:24.053789  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-999729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 : (1m27.53324562s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-999729 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-999729 image pull gcr.io/k8s-minikube/busybox: (2.249771208s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-999729
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-999729: (6.995614835s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-999729 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-999729 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 : (41.093471701s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-999729 image list
helpers_test.go:176: Cleaning up "test-preload-999729" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-999729
--- PASS: TestPreload (138.90s)

TestScheduledStopUnix (110.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-221083 --memory=3072 --driver=kvm2 
E1217 01:22:19.416220  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-221083 --memory=3072 --driver=kvm2 : (39.138152254s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221083 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:22:58.154643  407286 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:22:58.154887  407286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:58.154895  407286 out.go:374] Setting ErrFile to fd 2...
	I1217 01:22:58.154899  407286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:58.155135  407286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 01:22:58.155380  407286 out.go:368] Setting JSON to false
	I1217 01:22:58.155461  407286 mustload.go:66] Loading cluster: scheduled-stop-221083
	I1217 01:22:58.155760  407286 config.go:182] Loaded profile config "scheduled-stop-221083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:22:58.155828  407286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/config.json ...
	I1217 01:22:58.156018  407286 mustload.go:66] Loading cluster: scheduled-stop-221083
	I1217 01:22:58.156124  407286 config.go:182] Loaded profile config "scheduled-stop-221083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-221083 -n scheduled-stop-221083
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221083 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:22:58.449677  407330 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:22:58.449944  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:58.449953  407330 out.go:374] Setting ErrFile to fd 2...
	I1217 01:22:58.449957  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:58.450138  407330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 01:22:58.450424  407330 out.go:368] Setting JSON to false
	I1217 01:22:58.450624  407330 daemonize_unix.go:73] killing process 407319 as it is an old scheduled stop
	I1217 01:22:58.450729  407330 mustload.go:66] Loading cluster: scheduled-stop-221083
	I1217 01:22:58.451092  407330 config.go:182] Loaded profile config "scheduled-stop-221083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:22:58.451183  407330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/config.json ...
	I1217 01:22:58.451392  407330 mustload.go:66] Loading cluster: scheduled-stop-221083
	I1217 01:22:58.451539  407330 config.go:182] Loaded profile config "scheduled-stop-221083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 01:22:58.457225  383008 retry.go:31] will retry after 115.84µs: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.458383  383008 retry.go:31] will retry after 193.281µs: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.459544  383008 retry.go:31] will retry after 178.565µs: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.460704  383008 retry.go:31] will retry after 326.205µs: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.461867  383008 retry.go:31] will retry after 588.641µs: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.463028  383008 retry.go:31] will retry after 671.183µs: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.464180  383008 retry.go:31] will retry after 1.325028ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.466396  383008 retry.go:31] will retry after 1.61028ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.468596  383008 retry.go:31] will retry after 2.597706ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.471822  383008 retry.go:31] will retry after 3.502465ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.476023  383008 retry.go:31] will retry after 6.170819ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.483259  383008 retry.go:31] will retry after 11.276326ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.495482  383008 retry.go:31] will retry after 18.341022ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.514736  383008 retry.go:31] will retry after 11.151874ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.526997  383008 retry.go:31] will retry after 22.812154ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
I1217 01:22:58.550230  383008 retry.go:31] will retry after 48.323671ms: open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221083 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1217 01:23:10.807339  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221083 -n scheduled-stop-221083
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-221083
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221083 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:23:24.173443  407479 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:23:24.173751  407479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:23:24.173762  407479 out.go:374] Setting ErrFile to fd 2...
	I1217 01:23:24.173766  407479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:23:24.174010  407479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22140-379084/.minikube/bin
	I1217 01:23:24.174331  407479 out.go:368] Setting JSON to false
	I1217 01:23:24.174434  407479 mustload.go:66] Loading cluster: scheduled-stop-221083
	I1217 01:23:24.174750  407479 config.go:182] Loaded profile config "scheduled-stop-221083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:23:24.174835  407479 profile.go:143] Saving config to /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/scheduled-stop-221083/config.json ...
	I1217 01:23:24.175107  407479 mustload.go:66] Loading cluster: scheduled-stop-221083
	I1217 01:23:24.175243  407479 config.go:182] Loaded profile config "scheduled-stop-221083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-221083
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-221083: exit status 7 (67.978901ms)

-- stdout --
	scheduled-stop-221083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221083 -n scheduled-stop-221083
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221083 -n scheduled-stop-221083: exit status 7 (63.25111ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-221083" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-221083
--- PASS: TestScheduledStopUnix (110.80s)

TestSkaffold (120.87s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3631422797 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-413614 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-413614 --memory=3072 --driver=kvm2 : (39.560571805s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3631422797 run --minikube-profile skaffold-413614 --kube-context skaffold-413614 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3631422797 run --minikube-profile skaffold-413614 --kube-context skaffold-413614 --status-check=true --port-forward=false --interactive=false: (1m5.247737211s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-65ffd44f87-r9vh8" [9c1dcda7-7b5d-4b7e-84d6-f3fb7e35a019] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003414769s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-6469b56749-4v67b" [719afcda-e933-4f17-86a1-81f9d70cb8b8] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003322599s
helpers_test.go:176: Cleaning up "skaffold-413614" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-413614
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-413614: (1.037605226s)
--- PASS: TestSkaffold (120.87s)

TestRunningBinaryUpgrade (370.87s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2310192155 start -p running-upgrade-360777 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2310192155 start -p running-upgrade-360777 --memory=3072 --vm-driver=kvm2 : (1m11.916136653s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-360777 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
E1217 01:31:19.066178  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:31:24.054267  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-360777 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (4m53.95348196s)
helpers_test.go:176: Cleaning up "running-upgrade-360777" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-360777
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-360777: (1.174717012s)
--- PASS: TestRunningBinaryUpgrade (370.87s)

TestKubernetesUpgrade (155.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (45.880660504s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-895942
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-895942: (3.177429256s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-895942 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-895942 status --format={{.Host}}: exit status 7 (67.75284ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (34.306300214s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-895942 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (87.632875ms)

-- stdout --
	* [kubernetes-upgrade-895942] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-895942
	    minikube start -p kubernetes-upgrade-895942 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8959422 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-895942 --kubernetes-version=v1.35.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
E1217 01:30:22.487801  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-895942 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (1m10.849542148s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-895942" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-895942
--- PASS: TestKubernetesUpgrade (155.28s)

TestISOImage/Setup (57.04s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-625557 --no-kubernetes --driver=kvm2 
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-625557 --no-kubernetes --driver=kvm2 : (57.039484675s)
--- PASS: TestISOImage/Setup (57.04s)

TestISOImage/Binaries/crictl (0.2s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

TestISOImage/Binaries/curl (0.19s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

TestISOImage/Binaries/docker (0.2s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.20s)

TestISOImage/Binaries/git (0.19s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

TestISOImage/Binaries/iptables (0.2s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.20s)

TestISOImage/Binaries/podman (0.18s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

TestISOImage/Binaries/rsync (0.18s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

TestISOImage/Binaries/socat (0.19s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

TestISOImage/Binaries/wget (0.18s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

TestISOImage/Binaries/VBoxControl (0.17s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

TestISOImage/Binaries/VBoxService (0.18s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-625557 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.75s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3528383658 start -p stopped-upgrade-663118 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3528383658 start -p stopped-upgrade-663118 --memory=3072 --vm-driver=kvm2 : (1m21.574277551s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3528383658 -p stopped-upgrade-663118 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3528383658 -p stopped-upgrade-663118 stop: (13.796199529s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-663118 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
E1217 01:30:58.571063  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:58.577448  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:58.588856  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:58.610263  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:58.651667  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:58.733157  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:58.894769  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:59.216547  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:59.858902  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:31:01.140473  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:31:03.702090  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:31:08.823693  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-663118 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (30.926748481s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (126.30s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-310397 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-310397 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (83.820744ms)
-- stdout --
	* [NoKubernetes-310397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22140
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22140-379084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22140-379084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-310397 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-310397 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (45.305492526s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-310397 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.61s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-663118
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-663118: (1.167258521s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-596693 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
E1217 01:31:39.548760  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-596693 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (1m47.965733927s)
--- PASS: TestPause/serial/Start (107.97s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-310397 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E1217 01:32:19.415758  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:20.510782  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-310397 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (14.036261545s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-310397 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-310397 status -o json: exit status 2 (258.443295ms)
-- stdout --
	{"Name":"NoKubernetes-310397","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-310397
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-310397: (1.044700909s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.34s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-310397 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-310397 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (21.806464766s)
--- PASS: TestNoKubernetes/serial/Start (21.81s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m13.339337309s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.34s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22140-379084/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-310397 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-310397 "sudo systemctl is-active --quiet service kubelet": exit status 1 (178.763295ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.41s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-310397
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-310397: (1.34967608s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-310397 --driver=kvm2 
E1217 01:32:56.746281  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:56.752703  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:56.764249  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:56.785689  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:56.827240  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:56.908775  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:57.070411  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:57.392201  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:58.034534  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:32:59.316622  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:33:01.878080  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:33:06.999375  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:33:10.806587  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:33:17.240828  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-310397 --driver=kvm2 : (35.075706755s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.08s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-596693 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-596693 --alsologtostderr -v=1 --driver=kvm2 : (1m2.91815604s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (62.94s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-310397 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-310397 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.169713ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1217 01:33:37.722195  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:33:42.432231  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m13.371412701s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.37s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-739084 "pgrep -a kubelet"
I1217 01:33:46.832078  383008 config.go:182] Loaded profile config "auto-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ncgsd" [1526c444-0172-4f36-834d-c32e59bb3142] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ncgsd" [1526c444-0172-4f36-834d-c32e59bb3142] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005194789s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E1217 01:34:18.684049  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m33.091271567s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.09s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-596693 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-596693 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-596693 --output=json --layout=cluster: exit status 2 (259.15662ms)
-- stdout --
	{"Name":"pause-596693","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-596693","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-596693 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-596693 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-596693 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.98s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.430889823s)
--- PASS: TestPause/serial/VerifyDeletedResources (15.43s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m3.022957449s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-kkq2s" [a0daa9f3-c46b-44f8-8a93-cfec7058b7b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006177991s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-739084 "pgrep -a kubelet"
I1217 01:34:48.232205  383008 config.go:182] Loaded profile config "kindnet-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7ht59" [9bf7b74e-b014-420f-8c3d-a841f62b24d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7ht59" [9bf7b74e-b014-420f-8c3d-a841f62b24d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005327546s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/false/Start (90.69s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1217 01:35:40.605995  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m30.693524225s)
--- PASS: TestNetworkPlugins/group/false/Start (90.69s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-739084 "pgrep -a kubelet"
I1217 01:35:45.051321  383008 config.go:182] Loaded profile config "custom-flannel-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9pjgv" [51f6fdae-bb05-402e-807c-9edc82cabc12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9pjgv" [51f6fdae-bb05-402e-807c-9edc82cabc12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.007106751s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-zmjtj" [6072060f-7728-4015-a864-c402b9ecfaa3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004868419s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-739084 "pgrep -a kubelet"
I1217 01:35:52.781214  383008 config.go:182] Loaded profile config "calico-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

TestNetworkPlugins/group/calico/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lsr9h" [da645288-713c-4cd4-bd6e-cd1f94e73ecc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-lsr9h" [da645288-713c-4cd4-bd6e-cd1f94e73ecc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006334194s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (87.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m27.641955127s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.64s)

TestNetworkPlugins/group/flannel/Start (83.05s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m23.054377686s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.05s)

TestNetworkPlugins/group/bridge/Start (91.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1217 01:36:24.053766  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:26.274489  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-739084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m31.281282722s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.28s)

TestNetworkPlugins/group/false/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-739084 "pgrep -a kubelet"
I1217 01:36:48.404452  383008 config.go:182] Loaded profile config "false-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.21s)

TestNetworkPlugins/group/false/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-866hq" [22b5135b-d6c5-42a0-b761-145746a5dcbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-866hq" [22b5135b-d6c5-42a0-b761-145746a5dcbe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.005973881s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.23s)

TestNetworkPlugins/group/false/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.26s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.22s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-988wk" [f93b2648-3ebe-458b-b142-236aad0591a0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005361902s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-739084 "pgrep -a kubelet"
I1217 01:37:38.124865  383008 config.go:182] Loaded profile config "enable-default-cni-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cqzfh" [73276830-da5d-4138-81a9-7dc7352aa103] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cqzfh" [73276830-da5d-4138-81a9-7dc7352aa103] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005050953s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-739084 "pgrep -a kubelet"
I1217 01:37:42.935478  383008 config.go:182] Loaded profile config "flannel-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mtjwq" [403c8ff1-1d3d-40a4-97ef-79a883b07cf1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mtjwq" [403c8ff1-1d3d-40a4-97ef-79a883b07cf1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004715904s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-739084 "pgrep -a kubelet"
I1217 01:37:53.757336  383008 config.go:182] Loaded profile config "bridge-739084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-739084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-s9s77" [b8d64009-169c-4099-bd3f-787c05b9ef3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-s9s77" [b8d64009-169c-4099-bd3f-787c05b9ef3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004672466s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-739084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-739084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (99.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-515862 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-515862 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m39.40705712s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (99.41s)

TestStartStop/group/no-preload/serial/FirstStart (108.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-234486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-234486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (1m48.416519481s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.42s)

TestStartStop/group/embed-certs/serial/FirstStart (92.8s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-071689 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-071689 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2: (1m32.803978333s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.80s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (134.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-136977 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2
E1217 01:38:24.447927  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.073654  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.080019  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.091370  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.112881  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.154308  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.235838  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.397386  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:47.718766  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:48.361131  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:49.643447  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:52.204845  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:57.326444  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:07.568392  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:28.050117  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.031391  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.037896  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.049224  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.070900  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.112363  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.193941  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.356026  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:42.677613  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:43.319391  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:39:44.600774  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-136977 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2: (2m14.960359803s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (134.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-515862 create -f testdata/busybox.yaml
E1217 01:39:47.162246  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ef1b6f6d-3b4b-4e39-96b4-ca4f54b4f8a2] Pending
helpers_test.go:353: "busybox" [ef1b6f6d-3b4b-4e39-96b4-ca4f54b4f8a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ef1b6f6d-3b4b-4e39-96b4-ca4f54b4f8a2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004051305s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-515862 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-071689 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bdf7a2f5-5d6c-43d0-9b57-82312d5ac323] Pending
helpers_test.go:353: "busybox" [bdf7a2f5-5d6c-43d0-9b57-82312d5ac323] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1217 01:39:52.283684  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [bdf7a2f5-5d6c-43d0-9b57-82312d5ac323] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004904063s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-071689 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-515862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-515862 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-515862 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-515862 --alsologtostderr -v=3: (13.573689406s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-071689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-071689 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-234486 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [46148b2e-3b3a-4cb9-97c9-7ea4f02a3880] Pending
helpers_test.go:353: "busybox" [46148b2e-3b3a-4cb9-97c9-7ea4f02a3880] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1217 01:40:02.525547  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [46148b2e-3b3a-4cb9-97c9-7ea4f02a3880] Running
E1217 01:40:09.012309  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004356589s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-234486 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (14.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-071689 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-071689 --alsologtostderr -v=3: (14.082067068s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-234486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-234486 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-234486 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-234486 --alsologtostderr -v=3: (13.531212437s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-515862 -n old-k8s-version-515862
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-515862 -n old-k8s-version-515862: exit status 7 (81.261093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-515862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (42.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-515862 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-515862 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (42.186656631s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-515862 -n old-k8s-version-515862
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071689 -n embed-certs-071689
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071689 -n embed-certs-071689: exit status 7 (66.53442ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-071689 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (61.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-071689 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2
E1217 01:40:23.007298  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-071689 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2: (1m0.987185809s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071689 -n embed-certs-071689
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (61.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-234486 -n no-preload-234486
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-234486 -n no-preload-234486: exit status 7 (82.514913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-234486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (67.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-234486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-234486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (1m6.642122728s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-234486 -n no-preload-234486
E1217 01:41:30.933712  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/auto-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (67.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-136977 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3369e9e7-8e09-459c-a6a4-9595292250b1] Pending
helpers_test.go:353: "busybox" [3369e9e7-8e09-459c-a6a4-9595292250b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3369e9e7-8e09-459c-a6a4-9595292250b1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005464848s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-136977 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-136977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 01:40:45.375250  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:45.381772  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:45.393566  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:45.415066  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:45.456584  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:45.538498  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:45.700002  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-136977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041128148s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-136977 describe deploy/metrics-server -n kube-system
E1217 01:40:46.022004  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (14.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-136977 --alsologtostderr -v=3
E1217 01:40:46.596382  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:46.602952  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:46.614435  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:46.636010  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:46.663534  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:46.678014  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:46.760358  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:46.922583  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:47.244718  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:47.886700  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:47.945777  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:49.168146  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:50.507306  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:51.730775  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-136977 --alsologtostderr -v=3: (14.6247355s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-mkmcm" [ca79c86a-42c3-486c-96b3-65a86f0bce24] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1217 01:40:55.629749  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:56.852454  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:58.570506  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/skaffold-413614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-mkmcm" [ca79c86a-42c3-486c-96b3-65a86f0bce24] Running
E1217 01:41:03.968956  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:41:05.872142  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004759903s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977: exit status 7 (80.38094ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-136977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-136977 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-136977 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2: (49.09911089s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.37s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-mkmcm" [ca79c86a-42c3-486c-96b3-65a86f0bce24] Running
E1217 01:41:07.095299  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005355261s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-515862 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-515862 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-515862 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-515862 -n old-k8s-version-515862
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-515862 -n old-k8s-version-515862: exit status 2 (250.879449ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-515862 -n old-k8s-version-515862
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-515862 -n old-k8s-version-515862: exit status 2 (249.115376ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-515862 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-515862 -n old-k8s-version-515862
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-515862 -n old-k8s-version-515862
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pq792" [6f9f6bea-be7b-4bee-bc25-52427f4009e6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004828254s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/FirstStart (53.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-374011 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-374011 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (53.713495563s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.71s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pq792" [6f9f6bea-be7b-4bee-bc25-52427f4009e6] Running
E1217 01:41:24.054424  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/addons-411941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:41:26.353447  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/custom-flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004635768s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-071689 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-071689 image list --format=json
E1217 01:41:27.577564  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/calico-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (3.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-071689 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071689 -n embed-certs-071689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071689 -n embed-certs-071689: exit status 2 (275.634209ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-071689 -n embed-certs-071689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-071689 -n embed-certs-071689: exit status 2 (255.86747ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-071689 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-071689 --alsologtostderr -v=1: (1.501194842s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071689 -n embed-certs-071689
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-071689 -n embed-certs-071689
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.78s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f2hlt" [7fce3ecc-20c3-4856-a372-8e3ab85df282] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006981207s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f2hlt" [7fce3ecc-20c3-4856-a372-8e3ab85df282] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003686618s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-234486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-234486 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.82s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-234486 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-234486 -n no-preload-234486
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-234486 -n no-preload-234486: exit status 2 (273.306596ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-234486 -n no-preload-234486
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-234486 -n no-preload-234486: exit status 2 (260.779978ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-234486 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-234486 -n no-preload-234486
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-234486 -n no-preload-234486
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.82s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-k6tbj" [43aa47e6-59e1-4adb-aa92-5b7543946899] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1217 01:41:51.191235  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/false-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-k6tbj" [43aa47e6-59e1-4adb-aa92-5b7543946899] Running
E1217 01:41:53.753020  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/false-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00444982s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-k6tbj" [43aa47e6-59e1-4adb-aa92-5b7543946899] Running
E1217 01:41:58.875197  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/false-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005078204s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-136977 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-136977 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-136977 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977: exit status 2 (243.524393ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977: exit status 2 (220.061222ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-136977 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136977 -n default-k8s-diff-port-136977
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.44s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-374011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/newest-cni/serial/Stop (14.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-374011 --alsologtostderr -v=3
E1217 01:42:19.415617  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-216033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-374011 --alsologtostderr -v=3: (14.432865964s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (14.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374011 -n newest-cni-374011
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374011 -n newest-cni-374011: exit status 7 (60.311127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-374011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/SecondStart (29.14s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-374011 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
E1217 01:42:25.890881  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/kindnet-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:29.599729  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/false-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:36.685154  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:36.691522  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:36.702878  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:36.724274  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:36.765840  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:36.847331  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:37.008963  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:37.331221  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:37.973254  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:38.387172  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:38.393561  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:38.404922  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:38.426249  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:38.467616  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:38.549014  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:38.710834  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:39.032522  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:39.255129  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:39.674441  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:40.956122  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:41.816535  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:43.518298  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:46.938013  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:48.640524  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/enable-default-cni-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:53.876158  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:53.994793  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:54.001238  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:54.012732  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:54.034254  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:54.075918  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:54.157454  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:42:54.319745  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-374011 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (28.809889698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374011 -n newest-cni-374011
E1217 01:42:54.641571  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-374011 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-374011 --alsologtostderr -v=1
E1217 01:42:55.283691  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374011 -n newest-cni-374011
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374011 -n newest-cni-374011: exit status 2 (206.668669ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374011 -n newest-cni-374011
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374011 -n newest-cni-374011: exit status 2 (203.670596ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-374011 --alsologtostderr -v=1
E1217 01:42:56.565877  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/bridge-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374011 -n newest-cni-374011
E1217 01:42:56.745997  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/gvisor-506412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374011 -n newest-cni-374011
E1217 01:42:57.179351  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/flannel-739084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.21s)


Test skip (45/447)

Order Skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
155 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
156 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
157 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
158 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
159 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
160 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
161 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
162 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
289 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
317 TestKicCustomNetwork 0
318 TestKicExistingNetwork 0
319 TestKicCustomSubnet 0
320 TestKicStaticIP 0
352 TestChangeNoneUser 0
355 TestScheduledStopWindows 0
359 TestInsufficientStorage 0
363 TestMissingContainerUpgrade 0
375 TestNetworkPlugins/group/cilium 3.94
393 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.94s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1217 01:26:13.874290  383008 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22140-379084/.minikube/profiles/functional-989491/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-739084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-739084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-739084

>>> host: /etc/nsswitch.conf:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /etc/hosts:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /etc/resolv.conf:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-739084

>>> host: crictl pods:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: crictl containers:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> k8s: describe netcat deployment:
error: context "cilium-739084" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-739084" does not exist

>>> k8s: netcat logs:
error: context "cilium-739084" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-739084" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-739084" does not exist

>>> k8s: coredns logs:
error: context "cilium-739084" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-739084" does not exist

>>> k8s: api server logs:
error: context "cilium-739084" does not exist

>>> host: /etc/cni:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: ip a s:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: ip r s:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: iptables-save:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: iptables table nat:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-739084

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-739084

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-739084" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-739084" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-739084

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-739084

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-739084" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-739084" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-739084" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-739084" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-739084" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: kubelet daemon config:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> k8s: kubelet logs:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-739084

>>> host: docker daemon status:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: docker daemon config:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: docker system info:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: cri-docker daemon status:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: cri-docker daemon config:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: cri-dockerd version:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: containerd daemon status:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: containerd daemon config:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: containerd config dump:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: crio daemon status:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: crio daemon config:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: /etc/crio:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"

>>> host: crio config:
* Profile "cilium-739084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-739084"
----------------------- debugLogs end: cilium-739084 [took: 3.764648818s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-739084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-739084
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-608702" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-608702
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)