Test Report: KVM_Linux 20385

693540c0733dd51efa818bcfa77a0c31e0bd95f4:2025-02-10:38290
Failed tests (1/338)

Order  Failed test                              Duration
354    TestNetworkPlugins/group/kindnet/Start   89.7s
TestNetworkPlugins/group/kindnet/Start (89.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kindnet-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : exit status 90 (1m29.670150762s)

-- stdout --
	* [kindnet-632332] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kindnet-632332" primary control-plane node in "kindnet-632332" cluster
	* Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0210 11:26:43.455746  474718 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:26:43.456024  474718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:26:43.456034  474718 out.go:358] Setting ErrFile to fd 2...
	I0210 11:26:43.456038  474718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:26:43.456224  474718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 11:26:43.456864  474718 out.go:352] Setting JSON to false
	I0210 11:26:43.458107  474718 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11353,"bootTime":1739175450,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:26:43.458222  474718 start.go:139] virtualization: kvm guest
	I0210 11:26:43.460408  474718 out.go:177] * [kindnet-632332] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:26:43.461855  474718 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:26:43.461854  474718 notify.go:220] Checking for updates...
	I0210 11:26:43.464631  474718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:26:43.465779  474718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	I0210 11:26:43.467054  474718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	I0210 11:26:43.468484  474718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:26:43.469709  474718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:26:43.471678  474718 config.go:182] Loaded profile config "auto-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:26:43.471837  474718 config.go:182] Loaded profile config "default-k8s-diff-port-732540": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:26:43.471996  474718 config.go:182] Loaded profile config "old-k8s-version-617698": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0210 11:26:43.472127  474718 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:26:43.831834  474718 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 11:26:43.832869  474718 start.go:297] selected driver: kvm2
	I0210 11:26:43.832898  474718 start.go:901] validating driver "kvm2" against <nil>
	I0210 11:26:43.832913  474718 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:26:43.833638  474718 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:26:43.833726  474718 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-421267/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:26:43.850167  474718 install.go:137] /home/jenkins/minikube-integration/20385-421267/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:26:43.850214  474718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 11:26:43.850508  474718 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:26:43.850545  474718 cni.go:84] Creating CNI manager for "kindnet"
	I0210 11:26:43.850555  474718 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 11:26:43.850615  474718 start.go:340] cluster config:
	{Name:kindnet-632332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kindnet-632332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:26:43.850735  474718 iso.go:125] acquiring lock: {Name:mkf9a3fabe49fac7b346f5a0bab423b6773c58da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:26:43.852419  474718 out.go:177] * Starting "kindnet-632332" primary control-plane node in "kindnet-632332" cluster
	I0210 11:26:43.853756  474718 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 11:26:43.853792  474718 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-421267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0210 11:26:43.853805  474718 cache.go:56] Caching tarball of preloaded images
	I0210 11:26:43.853907  474718 preload.go:172] Found /home/jenkins/minikube-integration/20385-421267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 11:26:43.853922  474718 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 11:26:43.854027  474718 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/kindnet-632332/config.json ...
	I0210 11:26:43.854051  474718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/kindnet-632332/config.json: {Name:mk6eacfd2f720c854889d64e05cfc240780b4b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:26:43.854192  474718 start.go:360] acquireMachinesLock for kindnet-632332: {Name:mk1c45a3766bfe9d986f9499ed20f8622d2c9eca Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:26:43.854233  474718 start.go:364] duration metric: took 22.935µs to acquireMachinesLock for "kindnet-632332"
	I0210 11:26:43.854255  474718 start.go:93] Provisioning new machine with config: &{Name:kindnet-632332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kindnet-632332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:26:43.854308  474718 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 11:26:43.855814  474718 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0210 11:26:43.855961  474718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20385-421267/.minikube/bin/docker-machine-driver-kvm2
	I0210 11:26:43.856014  474718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:26:43.871574  474718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0210 11:26:43.872015  474718 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:26:43.872577  474718 main.go:141] libmachine: Using API Version  1
	I0210 11:26:43.872599  474718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:26:43.872934  474718 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:26:43.873148  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetMachineName
	I0210 11:26:43.873328  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:26:43.873495  474718 start.go:159] libmachine.API.Create for "kindnet-632332" (driver="kvm2")
	I0210 11:26:43.873534  474718 client.go:168] LocalClient.Create starting
	I0210 11:26:43.873571  474718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-421267/.minikube/certs/ca.pem
	I0210 11:26:43.873621  474718 main.go:141] libmachine: Decoding PEM data...
	I0210 11:26:43.873646  474718 main.go:141] libmachine: Parsing certificate...
	I0210 11:26:43.873734  474718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-421267/.minikube/certs/cert.pem
	I0210 11:26:43.873764  474718 main.go:141] libmachine: Decoding PEM data...
	I0210 11:26:43.873780  474718 main.go:141] libmachine: Parsing certificate...
	I0210 11:26:43.873809  474718 main.go:141] libmachine: Running pre-create checks...
	I0210 11:26:43.873822  474718 main.go:141] libmachine: (kindnet-632332) Calling .PreCreateCheck
	I0210 11:26:43.874188  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetConfigRaw
	I0210 11:26:43.874639  474718 main.go:141] libmachine: Creating machine...
	I0210 11:26:43.874655  474718 main.go:141] libmachine: (kindnet-632332) Calling .Create
	I0210 11:26:43.874778  474718 main.go:141] libmachine: (kindnet-632332) creating KVM machine...
	I0210 11:26:43.874805  474718 main.go:141] libmachine: (kindnet-632332) creating network...
	I0210 11:26:43.876191  474718 main.go:141] libmachine: (kindnet-632332) DBG | found existing default KVM network
	I0210 11:26:43.877198  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:43.877009  474757 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:99:84} reservation:<nil>}
	I0210 11:26:43.877867  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:43.877812  474757 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:61:a8} reservation:<nil>}
	I0210 11:26:43.878986  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:43.878888  474757 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025cdd0}
	I0210 11:26:43.879011  474718 main.go:141] libmachine: (kindnet-632332) DBG | created network xml: 
	I0210 11:26:43.879028  474718 main.go:141] libmachine: (kindnet-632332) DBG | <network>
	I0210 11:26:43.879037  474718 main.go:141] libmachine: (kindnet-632332) DBG |   <name>mk-kindnet-632332</name>
	I0210 11:26:43.879054  474718 main.go:141] libmachine: (kindnet-632332) DBG |   <dns enable='no'/>
	I0210 11:26:43.879060  474718 main.go:141] libmachine: (kindnet-632332) DBG |   
	I0210 11:26:43.879071  474718 main.go:141] libmachine: (kindnet-632332) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0210 11:26:43.879089  474718 main.go:141] libmachine: (kindnet-632332) DBG |     <dhcp>
	I0210 11:26:43.879097  474718 main.go:141] libmachine: (kindnet-632332) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0210 11:26:43.879104  474718 main.go:141] libmachine: (kindnet-632332) DBG |     </dhcp>
	I0210 11:26:43.879111  474718 main.go:141] libmachine: (kindnet-632332) DBG |   </ip>
	I0210 11:26:43.879121  474718 main.go:141] libmachine: (kindnet-632332) DBG |   
	I0210 11:26:43.879129  474718 main.go:141] libmachine: (kindnet-632332) DBG | </network>
	I0210 11:26:43.879135  474718 main.go:141] libmachine: (kindnet-632332) DBG | 
	I0210 11:26:43.884560  474718 main.go:141] libmachine: (kindnet-632332) DBG | trying to create private KVM network mk-kindnet-632332 192.168.61.0/24...
	I0210 11:26:43.964082  474718 main.go:141] libmachine: (kindnet-632332) DBG | private KVM network mk-kindnet-632332 192.168.61.0/24 created
	I0210 11:26:43.964123  474718 main.go:141] libmachine: (kindnet-632332) setting up store path in /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332 ...
	I0210 11:26:43.964138  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:43.964046  474757 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20385-421267/.minikube
	I0210 11:26:43.964156  474718 main.go:141] libmachine: (kindnet-632332) building disk image from file:///home/jenkins/minikube-integration/20385-421267/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 11:26:43.964194  474718 main.go:141] libmachine: (kindnet-632332) Downloading /home/jenkins/minikube-integration/20385-421267/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20385-421267/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:26:44.286427  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:44.286278  474757 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/id_rsa...
	I0210 11:26:44.386008  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:44.385845  474757 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/kindnet-632332.rawdisk...
	I0210 11:26:44.386033  474718 main.go:141] libmachine: (kindnet-632332) DBG | Writing magic tar header
	I0210 11:26:44.386047  474718 main.go:141] libmachine: (kindnet-632332) DBG | Writing SSH key tar header
	I0210 11:26:44.386058  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:44.385959  474757 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332 ...
	I0210 11:26:44.386072  474718 main.go:141] libmachine: (kindnet-632332) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332
	I0210 11:26:44.386093  474718 main.go:141] libmachine: (kindnet-632332) setting executable bit set on /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332 (perms=drwx------)
	I0210 11:26:44.386100  474718 main.go:141] libmachine: (kindnet-632332) setting executable bit set on /home/jenkins/minikube-integration/20385-421267/.minikube/machines (perms=drwxr-xr-x)
	I0210 11:26:44.386108  474718 main.go:141] libmachine: (kindnet-632332) setting executable bit set on /home/jenkins/minikube-integration/20385-421267/.minikube (perms=drwxr-xr-x)
	I0210 11:26:44.386123  474718 main.go:141] libmachine: (kindnet-632332) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-421267/.minikube/machines
	I0210 11:26:44.386133  474718 main.go:141] libmachine: (kindnet-632332) setting executable bit set on /home/jenkins/minikube-integration/20385-421267 (perms=drwxrwxr-x)
	I0210 11:26:44.386253  474718 main.go:141] libmachine: (kindnet-632332) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-421267/.minikube
	I0210 11:26:44.386296  474718 main.go:141] libmachine: (kindnet-632332) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-421267
	I0210 11:26:44.386308  474718 main.go:141] libmachine: (kindnet-632332) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 11:26:44.386322  474718 main.go:141] libmachine: (kindnet-632332) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 11:26:44.386335  474718 main.go:141] libmachine: (kindnet-632332) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 11:26:44.386352  474718 main.go:141] libmachine: (kindnet-632332) creating domain...
	I0210 11:26:44.386367  474718 main.go:141] libmachine: (kindnet-632332) DBG | checking permissions on dir: /home/jenkins
	I0210 11:26:44.386378  474718 main.go:141] libmachine: (kindnet-632332) DBG | checking permissions on dir: /home
	I0210 11:26:44.386408  474718 main.go:141] libmachine: (kindnet-632332) DBG | skipping /home - not owner
	I0210 11:26:44.387418  474718 main.go:141] libmachine: (kindnet-632332) define libvirt domain using xml: 
	I0210 11:26:44.387438  474718 main.go:141] libmachine: (kindnet-632332) <domain type='kvm'>
	I0210 11:26:44.387447  474718 main.go:141] libmachine: (kindnet-632332)   <name>kindnet-632332</name>
	I0210 11:26:44.387455  474718 main.go:141] libmachine: (kindnet-632332)   <memory unit='MiB'>3072</memory>
	I0210 11:26:44.387485  474718 main.go:141] libmachine: (kindnet-632332)   <vcpu>2</vcpu>
	I0210 11:26:44.387496  474718 main.go:141] libmachine: (kindnet-632332)   <features>
	I0210 11:26:44.387514  474718 main.go:141] libmachine: (kindnet-632332)     <acpi/>
	I0210 11:26:44.387527  474718 main.go:141] libmachine: (kindnet-632332)     <apic/>
	I0210 11:26:44.387536  474718 main.go:141] libmachine: (kindnet-632332)     <pae/>
	I0210 11:26:44.387545  474718 main.go:141] libmachine: (kindnet-632332)     
	I0210 11:26:44.387555  474718 main.go:141] libmachine: (kindnet-632332)   </features>
	I0210 11:26:44.387564  474718 main.go:141] libmachine: (kindnet-632332)   <cpu mode='host-passthrough'>
	I0210 11:26:44.387569  474718 main.go:141] libmachine: (kindnet-632332)   
	I0210 11:26:44.387576  474718 main.go:141] libmachine: (kindnet-632332)   </cpu>
	I0210 11:26:44.387581  474718 main.go:141] libmachine: (kindnet-632332)   <os>
	I0210 11:26:44.387590  474718 main.go:141] libmachine: (kindnet-632332)     <type>hvm</type>
	I0210 11:26:44.387598  474718 main.go:141] libmachine: (kindnet-632332)     <boot dev='cdrom'/>
	I0210 11:26:44.387610  474718 main.go:141] libmachine: (kindnet-632332)     <boot dev='hd'/>
	I0210 11:26:44.387622  474718 main.go:141] libmachine: (kindnet-632332)     <bootmenu enable='no'/>
	I0210 11:26:44.387632  474718 main.go:141] libmachine: (kindnet-632332)   </os>
	I0210 11:26:44.387642  474718 main.go:141] libmachine: (kindnet-632332)   <devices>
	I0210 11:26:44.387653  474718 main.go:141] libmachine: (kindnet-632332)     <disk type='file' device='cdrom'>
	I0210 11:26:44.387666  474718 main.go:141] libmachine: (kindnet-632332)       <source file='/home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/boot2docker.iso'/>
	I0210 11:26:44.387677  474718 main.go:141] libmachine: (kindnet-632332)       <target dev='hdc' bus='scsi'/>
	I0210 11:26:44.387700  474718 main.go:141] libmachine: (kindnet-632332)       <readonly/>
	I0210 11:26:44.387718  474718 main.go:141] libmachine: (kindnet-632332)     </disk>
	I0210 11:26:44.387732  474718 main.go:141] libmachine: (kindnet-632332)     <disk type='file' device='disk'>
	I0210 11:26:44.387744  474718 main.go:141] libmachine: (kindnet-632332)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 11:26:44.387755  474718 main.go:141] libmachine: (kindnet-632332)       <source file='/home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/kindnet-632332.rawdisk'/>
	I0210 11:26:44.387762  474718 main.go:141] libmachine: (kindnet-632332)       <target dev='hda' bus='virtio'/>
	I0210 11:26:44.387767  474718 main.go:141] libmachine: (kindnet-632332)     </disk>
	I0210 11:26:44.387774  474718 main.go:141] libmachine: (kindnet-632332)     <interface type='network'>
	I0210 11:26:44.387779  474718 main.go:141] libmachine: (kindnet-632332)       <source network='mk-kindnet-632332'/>
	I0210 11:26:44.387786  474718 main.go:141] libmachine: (kindnet-632332)       <model type='virtio'/>
	I0210 11:26:44.387792  474718 main.go:141] libmachine: (kindnet-632332)     </interface>
	I0210 11:26:44.387804  474718 main.go:141] libmachine: (kindnet-632332)     <interface type='network'>
	I0210 11:26:44.387812  474718 main.go:141] libmachine: (kindnet-632332)       <source network='default'/>
	I0210 11:26:44.387817  474718 main.go:141] libmachine: (kindnet-632332)       <model type='virtio'/>
	I0210 11:26:44.387822  474718 main.go:141] libmachine: (kindnet-632332)     </interface>
	I0210 11:26:44.387827  474718 main.go:141] libmachine: (kindnet-632332)     <serial type='pty'>
	I0210 11:26:44.387834  474718 main.go:141] libmachine: (kindnet-632332)       <target port='0'/>
	I0210 11:26:44.387838  474718 main.go:141] libmachine: (kindnet-632332)     </serial>
	I0210 11:26:44.387846  474718 main.go:141] libmachine: (kindnet-632332)     <console type='pty'>
	I0210 11:26:44.387850  474718 main.go:141] libmachine: (kindnet-632332)       <target type='serial' port='0'/>
	I0210 11:26:44.387859  474718 main.go:141] libmachine: (kindnet-632332)     </console>
	I0210 11:26:44.387863  474718 main.go:141] libmachine: (kindnet-632332)     <rng model='virtio'>
	I0210 11:26:44.387869  474718 main.go:141] libmachine: (kindnet-632332)       <backend model='random'>/dev/random</backend>
	I0210 11:26:44.387880  474718 main.go:141] libmachine: (kindnet-632332)     </rng>
	I0210 11:26:44.387885  474718 main.go:141] libmachine: (kindnet-632332)     
	I0210 11:26:44.387892  474718 main.go:141] libmachine: (kindnet-632332)     
	I0210 11:26:44.387899  474718 main.go:141] libmachine: (kindnet-632332)   </devices>
	I0210 11:26:44.387903  474718 main.go:141] libmachine: (kindnet-632332) </domain>
	I0210 11:26:44.387909  474718 main.go:141] libmachine: (kindnet-632332) 
	I0210 11:26:44.392211  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:33:78:3e in network default
	I0210 11:26:44.392797  474718 main.go:141] libmachine: (kindnet-632332) starting domain...
	I0210 11:26:44.392816  474718 main.go:141] libmachine: (kindnet-632332) ensuring networks are active...
	I0210 11:26:44.392824  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:44.393468  474718 main.go:141] libmachine: (kindnet-632332) Ensuring network default is active
	I0210 11:26:44.393760  474718 main.go:141] libmachine: (kindnet-632332) Ensuring network mk-kindnet-632332 is active
	I0210 11:26:44.394234  474718 main.go:141] libmachine: (kindnet-632332) getting domain XML...
	I0210 11:26:44.394861  474718 main.go:141] libmachine: (kindnet-632332) creating domain...
	I0210 11:26:45.659622  474718 main.go:141] libmachine: (kindnet-632332) waiting for IP...
	I0210 11:26:45.660495  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:45.660947  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:45.660992  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:45.660929  474757 retry.go:31] will retry after 282.467706ms: waiting for domain to come up
	I0210 11:26:45.945411  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:45.946139  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:45.946177  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:45.946080  474757 retry.go:31] will retry after 324.268962ms: waiting for domain to come up
	I0210 11:26:46.271725  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:46.272272  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:46.272321  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:46.272264  474757 retry.go:31] will retry after 395.720347ms: waiting for domain to come up
	I0210 11:26:46.670015  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:46.670741  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:46.670768  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:46.670714  474757 retry.go:31] will retry after 445.161383ms: waiting for domain to come up
	I0210 11:26:47.118205  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:47.118772  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:47.118806  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:47.118718  474757 retry.go:31] will retry after 562.830398ms: waiting for domain to come up
	I0210 11:26:47.683225  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:47.683794  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:47.683839  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:47.683747  474757 retry.go:31] will retry after 686.519592ms: waiting for domain to come up
	I0210 11:26:48.371767  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:48.372326  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:48.372416  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:48.372318  474757 retry.go:31] will retry after 989.794516ms: waiting for domain to come up
	I0210 11:26:49.364023  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:49.364525  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:49.364558  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:49.364483  474757 retry.go:31] will retry after 1.239986935s: waiting for domain to come up
	I0210 11:26:50.605936  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:50.606392  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:50.606458  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:50.606361  474757 retry.go:31] will retry after 1.179719832s: waiting for domain to come up
	I0210 11:26:51.787317  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:51.787791  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:51.787822  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:51.787769  474757 retry.go:31] will retry after 2.185371156s: waiting for domain to come up
	I0210 11:26:53.974693  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:53.975190  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:53.975287  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:53.975191  474757 retry.go:31] will retry after 2.441449468s: waiting for domain to come up
	I0210 11:26:56.418153  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:56.418579  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:56.418604  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:56.418558  474757 retry.go:31] will retry after 3.225061807s: waiting for domain to come up
	I0210 11:26:59.644851  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:26:59.645346  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:26:59.645380  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:26:59.645310  474757 retry.go:31] will retry after 3.826651166s: waiting for domain to come up
	I0210 11:27:03.474140  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:03.474599  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find current IP address of domain kindnet-632332 in network mk-kindnet-632332
	I0210 11:27:03.474618  474718 main.go:141] libmachine: (kindnet-632332) DBG | I0210 11:27:03.474572  474757 retry.go:31] will retry after 4.108434587s: waiting for domain to come up
	I0210 11:27:07.586305  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.586773  474718 main.go:141] libmachine: (kindnet-632332) found domain IP: 192.168.61.195
	I0210 11:27:07.586789  474718 main.go:141] libmachine: (kindnet-632332) reserving static IP address...
	I0210 11:27:07.586818  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has current primary IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.587284  474718 main.go:141] libmachine: (kindnet-632332) DBG | unable to find host DHCP lease matching {name: "kindnet-632332", mac: "52:54:00:a0:a6:31", ip: "192.168.61.195"} in network mk-kindnet-632332
	I0210 11:27:07.662763  474718 main.go:141] libmachine: (kindnet-632332) DBG | Getting to WaitForSSH function...
	I0210 11:27:07.662799  474718 main.go:141] libmachine: (kindnet-632332) reserved static IP address 192.168.61.195 for domain kindnet-632332
	I0210 11:27:07.662813  474718 main.go:141] libmachine: (kindnet-632332) waiting for SSH...
	I0210 11:27:07.665387  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.665904  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:07.665934  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.665952  474718 main.go:141] libmachine: (kindnet-632332) DBG | Using SSH client type: external
	I0210 11:27:07.665964  474718 main.go:141] libmachine: (kindnet-632332) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/id_rsa (-rw-------)
	I0210 11:27:07.665999  474718 main.go:141] libmachine: (kindnet-632332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:27:07.666018  474718 main.go:141] libmachine: (kindnet-632332) DBG | About to run SSH command:
	I0210 11:27:07.666029  474718 main.go:141] libmachine: (kindnet-632332) DBG | exit 0
	I0210 11:27:07.797118  474718 main.go:141] libmachine: (kindnet-632332) DBG | SSH cmd err, output: <nil>: 
	I0210 11:27:07.797464  474718 main.go:141] libmachine: (kindnet-632332) KVM machine creation complete
	I0210 11:27:07.798066  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetConfigRaw
	I0210 11:27:07.798633  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:07.798831  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:07.798971  474718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 11:27:07.799008  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetState
	I0210 11:27:07.800570  474718 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 11:27:07.800584  474718 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 11:27:07.800589  474718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 11:27:07.800594  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:07.803282  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.803684  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:07.803714  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.803863  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:07.804042  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:07.804237  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:07.804372  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:07.804608  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:07.804893  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:07.804911  474718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 11:27:07.920325  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:27:07.920350  474718 main.go:141] libmachine: Detecting the provisioner...
	I0210 11:27:07.920358  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:07.923538  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.923908  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:07.923943  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:07.924084  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:07.924290  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:07.924507  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:07.924667  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:07.924815  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:07.925013  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:07.925027  474718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 11:27:08.037637  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 11:27:08.037709  474718 main.go:141] libmachine: found compatible host: buildroot
	I0210 11:27:08.037715  474718 main.go:141] libmachine: Provisioning with buildroot...
	I0210 11:27:08.037723  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetMachineName
	I0210 11:27:08.037967  474718 buildroot.go:166] provisioning hostname "kindnet-632332"
	I0210 11:27:08.037995  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetMachineName
	I0210 11:27:08.038248  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:08.041171  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.041557  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.041584  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.041719  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:08.041904  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.042021  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.042176  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:08.042323  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:08.042548  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:08.042563  474718 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-632332 && echo "kindnet-632332" | sudo tee /etc/hostname
	I0210 11:27:08.173643  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-632332
	
	I0210 11:27:08.173668  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:08.177231  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.177638  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.177673  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.177798  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:08.177995  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.178146  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.178274  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:08.178452  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:08.178637  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:08.178660  474718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-632332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-632332/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-632332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:27:08.308622  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:27:08.308661  474718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-421267/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-421267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-421267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-421267/.minikube}
	I0210 11:27:08.308686  474718 buildroot.go:174] setting up certificates
	I0210 11:27:08.308701  474718 provision.go:84] configureAuth start
	I0210 11:27:08.308719  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetMachineName
	I0210 11:27:08.309129  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetIP
	I0210 11:27:08.312536  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.312951  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.312980  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.313219  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:08.315997  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.316339  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.316366  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.316527  474718 provision.go:143] copyHostCerts
	I0210 11:27:08.316590  474718 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-421267/.minikube/ca.pem, removing ...
	I0210 11:27:08.316605  474718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-421267/.minikube/ca.pem
	I0210 11:27:08.316666  474718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-421267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-421267/.minikube/ca.pem (1078 bytes)
	I0210 11:27:08.316746  474718 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-421267/.minikube/cert.pem, removing ...
	I0210 11:27:08.316754  474718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-421267/.minikube/cert.pem
	I0210 11:27:08.316777  474718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-421267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-421267/.minikube/cert.pem (1123 bytes)
	I0210 11:27:08.316829  474718 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-421267/.minikube/key.pem, removing ...
	I0210 11:27:08.316838  474718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-421267/.minikube/key.pem
	I0210 11:27:08.316869  474718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-421267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-421267/.minikube/key.pem (1675 bytes)
	I0210 11:27:08.316937  474718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-421267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-421267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-421267/.minikube/certs/ca-key.pem org=jenkins.kindnet-632332 san=[127.0.0.1 192.168.61.195 kindnet-632332 localhost minikube]
	I0210 11:27:08.434959  474718 provision.go:177] copyRemoteCerts
	I0210 11:27:08.435029  474718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:27:08.435062  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:08.437797  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.438101  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.438132  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.438251  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:08.438420  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.438628  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:08.438767  474718 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/id_rsa Username:docker}
	I0210 11:27:08.523645  474718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-421267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:27:08.548435  474718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-421267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:27:08.570034  474718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-421267/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0210 11:27:08.591631  474718 provision.go:87] duration metric: took 282.909732ms to configureAuth
	I0210 11:27:08.591667  474718 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:27:08.591887  474718 config.go:182] Loaded profile config "kindnet-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:27:08.591916  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:08.592185  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:08.594681  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.595084  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.595119  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.595230  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:08.595408  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.595562  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.595696  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:08.595855  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:08.596071  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:08.596087  474718 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 11:27:08.710903  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 11:27:08.710934  474718 buildroot.go:70] root file system type: tmpfs
	I0210 11:27:08.711079  474718 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 11:27:08.711107  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:08.713919  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.714269  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.714298  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.714461  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:08.714674  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.714825  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.714949  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:08.715073  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:08.715284  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:08.715823  474718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 11:27:08.843874  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 11:27:08.843920  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:08.847169  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.847426  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:08.847454  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:08.847664  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:08.847891  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.848056  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:08.848208  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:08.848350  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:08.848558  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:08.848584  474718 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 11:27:10.628407  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 11:27:10.628449  474718 main.go:141] libmachine: Checking connection to Docker...
	I0210 11:27:10.628462  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetURL
	I0210 11:27:10.629737  474718 main.go:141] libmachine: (kindnet-632332) DBG | using libvirt version 6000000
	I0210 11:27:10.632248  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.632569  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:10.632598  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.632792  474718 main.go:141] libmachine: Docker is up and running!
	I0210 11:27:10.632818  474718 main.go:141] libmachine: Reticulating splines...
	I0210 11:27:10.632827  474718 client.go:171] duration metric: took 26.759280679s to LocalClient.Create
	I0210 11:27:10.632855  474718 start.go:167] duration metric: took 26.759364852s to libmachine.API.Create "kindnet-632332"
	I0210 11:27:10.632868  474718 start.go:293] postStartSetup for "kindnet-632332" (driver="kvm2")
	I0210 11:27:10.632880  474718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:27:10.632906  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:10.633240  474718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:27:10.633278  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:10.635324  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.635650  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:10.635680  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.635834  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:10.636014  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:10.636153  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:10.636292  474718 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/id_rsa Username:docker}
	I0210 11:27:10.725178  474718 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:27:10.729272  474718 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:27:10.729302  474718 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-421267/.minikube/addons for local assets ...
	I0210 11:27:10.729388  474718 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-421267/.minikube/files for local assets ...
	I0210 11:27:10.729491  474718 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-421267/.minikube/files/etc/ssl/certs/4285472.pem -> 4285472.pem in /etc/ssl/certs
	I0210 11:27:10.729635  474718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:27:10.738520  474718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-421267/.minikube/files/etc/ssl/certs/4285472.pem --> /etc/ssl/certs/4285472.pem (1708 bytes)
	I0210 11:27:10.761125  474718 start.go:296] duration metric: took 128.219794ms for postStartSetup
	I0210 11:27:10.761196  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetConfigRaw
	I0210 11:27:10.761868  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetIP
	I0210 11:27:10.764742  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.765074  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:10.765118  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.765342  474718 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/kindnet-632332/config.json ...
	I0210 11:27:10.765608  474718 start.go:128] duration metric: took 26.91127301s to createHost
	I0210 11:27:10.765641  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:10.768053  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.768353  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:10.768384  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.768511  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:10.768688  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:10.768864  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:10.768996  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:10.769186  474718 main.go:141] libmachine: Using SSH client type: native
	I0210 11:27:10.769393  474718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I0210 11:27:10.769409  474718 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:27:10.881372  474718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739186830.851138771
	
	I0210 11:27:10.881394  474718 fix.go:216] guest clock: 1739186830.851138771
	I0210 11:27:10.881401  474718 fix.go:229] Guest: 2025-02-10 11:27:10.851138771 +0000 UTC Remote: 2025-02-10 11:27:10.765625938 +0000 UTC m=+27.361574663 (delta=85.512833ms)
	I0210 11:27:10.881430  474718 fix.go:200] guest clock delta is within tolerance: 85.512833ms
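	The `fix.go` lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host-side timestamp and proceed only while the delta stays inside a tolerance. The arithmetic checks out with `awk` (epoch values copied from the log; the bounds here are illustrative, not minikube's actual tolerance):

```shell
# Guest epoch from `date +%s.%N` and the host-side epoch, both taken from
# the log lines above; the 85.4..85.6 ms window is illustrative only.
guest=1739186830.851138771
remote=1739186830.765625938

# awk does the floating-point subtraction; shell arithmetic is integer-only.
verdict=$(awk -v g="$guest" -v r="$remote" 'BEGIN {
    delta_ms = (g - r) * 1000                  # ~85.5 ms, as logged
    v = (delta_ms >= 85.4 && delta_ms <= 85.6) ? "within" : "out"
    print v
}')
echo "$verdict"
```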
	I0210 11:27:10.881437  474718 start.go:83] releasing machines lock for "kindnet-632332", held for 27.027192557s
	I0210 11:27:10.881468  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:10.881730  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetIP
	I0210 11:27:10.884701  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.885124  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:10.885156  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.885351  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:10.885889  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:10.886077  474718 main.go:141] libmachine: (kindnet-632332) Calling .DriverName
	I0210 11:27:10.886155  474718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:27:10.886196  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:10.886315  474718 ssh_runner.go:195] Run: cat /version.json
	I0210 11:27:10.886379  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHHostname
	I0210 11:27:10.889033  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.889321  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.889441  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:10.889467  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.889624  474718 main.go:141] libmachine: (kindnet-632332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:a6:31", ip: ""} in network mk-kindnet-632332: {Iface:virbr3 ExpiryTime:2025-02-10 12:26:58 +0000 UTC Type:0 Mac:52:54:00:a0:a6:31 Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:kindnet-632332 Clientid:01:52:54:00:a0:a6:31}
	I0210 11:27:10.889645  474718 main.go:141] libmachine: (kindnet-632332) DBG | domain kindnet-632332 has defined IP address 192.168.61.195 and MAC address 52:54:00:a0:a6:31 in network mk-kindnet-632332
	I0210 11:27:10.889653  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:10.889821  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHPort
	I0210 11:27:10.889842  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:10.890029  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHKeyPath
	I0210 11:27:10.890034  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:10.890161  474718 main.go:141] libmachine: (kindnet-632332) Calling .GetSSHUsername
	I0210 11:27:10.890274  474718 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/id_rsa Username:docker}
	I0210 11:27:10.890271  474718 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/kindnet-632332/id_rsa Username:docker}
	I0210 11:27:10.994290  474718 ssh_runner.go:195] Run: systemctl --version
	I0210 11:27:11.001284  474718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:27:11.007137  474718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:27:11.007221  474718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:27:11.026290  474718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:27:11.026326  474718 start.go:495] detecting cgroup driver to use...
	I0210 11:27:11.026457  474718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:27:11.045727  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 11:27:11.056323  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:27:11.069190  474718 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:27:11.069270  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:27:11.081434  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:27:11.092639  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:27:11.104024  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:27:11.116479  474718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:27:11.127517  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:27:11.139528  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 11:27:11.153872  474718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
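	The run of `sed -i -r` commands above edits `/etc/containerd/config.toml` in place; the `SystemdCgroup` rewrite (11:27:11.069), for example, uses the `( *)` capture and `\1` back-reference to preserve the line's indentation. The same command applied to a scratch copy of the relevant stanza:

```shell
# Apply the logged SystemdCgroup rewrite to a scratch config.toml fragment;
# the capture group keeps the original indentation intact.
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '            SystemdCgroup = true' > "$cfg"

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"
```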
	I0210 11:27:11.164522  474718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:27:11.175976  474718 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:27:11.176046  474718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:27:11.191879  474718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:27:11.202054  474718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:27:11.323126  474718 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 11:27:11.346975  474718 start.go:495] detecting cgroup driver to use...
	I0210 11:27:11.347061  474718 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 11:27:11.370869  474718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:27:11.390460  474718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:27:11.418577  474718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:27:11.432496  474718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:27:11.448241  474718 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:27:11.478266  474718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:27:11.491441  474718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:27:11.509854  474718 ssh_runner.go:195] Run: which cri-dockerd
	I0210 11:27:11.513830  474718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 11:27:11.522674  474718 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 11:27:11.539795  474718 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 11:27:11.655651  474718 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 11:27:11.784041  474718 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 11:27:11.784203  474718 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 11:27:11.800999  474718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:27:11.910933  474718 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 11:28:12.991590  474718 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.080605727s)
	I0210 11:28:12.991699  474718 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0210 11:28:13.056799  474718 out.go:201] 
	W0210 11:28:13.057999  474718 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 10 11:27:09 kindnet-632332 systemd[1]: Starting Docker Application Container Engine...
	Feb 10 11:27:09 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:09.221031548Z" level=info msg="Starting up"
	Feb 10 11:27:09 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:09.221722240Z" level=info msg="containerd not running, starting managed containerd"
	Feb 10 11:27:09 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:09.222480175Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.250605491Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270463192Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270513447Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270562723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270575212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270649714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270661013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270873441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270903171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270916248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270925397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270991175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.271175699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273301973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273337624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273466461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273489930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273579869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273638272Z" level=info msg="metadata content store policy set" policy=shared
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285252883Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285332527Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285349705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285363543Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285378156Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285476322Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285712793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285876688Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285905771Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285922129Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285933729Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285973575Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285988426Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286003308Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286023556Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286034326Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286044649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286054377Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286083732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286095663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286107307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286118540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286131979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286143112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286153041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286165431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286177109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286189088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286198399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286208030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286219445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286232368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286258240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286269250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286280225Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286394871Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286417096Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286427006Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286440416Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286449110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286461724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286478072Z" level=info msg="NRI interface is disabled by configuration."
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286812702Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286876114Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286911444Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.287082380Z" level=info msg="containerd successfully booted in 0.037440s"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.262219547Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.278636876Z" level=info msg="Loading containers: start."
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.369318905Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.521650305Z" level=info msg="Loading containers: done."
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539233512Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539338046Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539421452Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539617398Z" level=info msg="Daemon has completed initialization"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.596167431Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 10 11:27:10 kindnet-632332 systemd[1]: Started Docker Application Container Engine.
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.597411817Z" level=info msg="API listen on [::]:2376"
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.895897122Z" level=info msg="Processing signal 'terminated'"
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.897483355Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.897666556Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.897908619Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.898102814Z" level=info msg="Daemon shutdown complete"
	Feb 10 11:27:11 kindnet-632332 systemd[1]: Stopping Docker Application Container Engine...
	Feb 10 11:27:12 kindnet-632332 systemd[1]: docker.service: Deactivated successfully.
	Feb 10 11:27:12 kindnet-632332 systemd[1]: Stopped Docker Application Container Engine.
	Feb 10 11:27:12 kindnet-632332 systemd[1]: Starting Docker Application Container Engine...
	Feb 10 11:27:12 kindnet-632332 dockerd[852]: time="2025-02-10T11:27:12.938011827Z" level=info msg="Starting up"
	Feb 10 11:28:12 kindnet-632332 dockerd[852]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 10 11:28:12 kindnet-632332 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 10 11:28:12 kindnet-632332 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 10 11:28:12 kindnet-632332 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 10 11:27:09 kindnet-632332 systemd[1]: Starting Docker Application Container Engine...
	Feb 10 11:27:09 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:09.221031548Z" level=info msg="Starting up"
	Feb 10 11:27:09 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:09.221722240Z" level=info msg="containerd not running, starting managed containerd"
	Feb 10 11:27:09 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:09.222480175Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.250605491Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270463192Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270513447Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270562723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270575212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270649714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270661013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270873441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270903171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270916248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270925397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.270991175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.271175699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273301973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273337624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273466461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273489930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273579869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.273638272Z" level=info msg="metadata content store policy set" policy=shared
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285252883Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285332527Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285349705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285363543Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285378156Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285476322Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285712793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285876688Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285905771Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285922129Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285933729Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285973575Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.285988426Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286003308Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286023556Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286034326Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286044649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286054377Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286083732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286095663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286107307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286118540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286131979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286143112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286153041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286165431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286177109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286189088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286198399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286208030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286219445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286232368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286258240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286269250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286280225Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286394871Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286417096Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286427006Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286440416Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286449110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286461724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286478072Z" level=info msg="NRI interface is disabled by configuration."
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286812702Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286876114Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.286911444Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 10 11:27:09 kindnet-632332 dockerd[532]: time="2025-02-10T11:27:09.287082380Z" level=info msg="containerd successfully booted in 0.037440s"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.262219547Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.278636876Z" level=info msg="Loading containers: start."
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.369318905Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.521650305Z" level=info msg="Loading containers: done."
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539233512Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539338046Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539421452Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.539617398Z" level=info msg="Daemon has completed initialization"
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.596167431Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 10 11:27:10 kindnet-632332 systemd[1]: Started Docker Application Container Engine.
	Feb 10 11:27:10 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:10.597411817Z" level=info msg="API listen on [::]:2376"
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.895897122Z" level=info msg="Processing signal 'terminated'"
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.897483355Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.897666556Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.897908619Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 10 11:27:11 kindnet-632332 dockerd[525]: time="2025-02-10T11:27:11.898102814Z" level=info msg="Daemon shutdown complete"
	Feb 10 11:27:11 kindnet-632332 systemd[1]: Stopping Docker Application Container Engine...
	Feb 10 11:27:12 kindnet-632332 systemd[1]: docker.service: Deactivated successfully.
	Feb 10 11:27:12 kindnet-632332 systemd[1]: Stopped Docker Application Container Engine.
	Feb 10 11:27:12 kindnet-632332 systemd[1]: Starting Docker Application Container Engine...
	Feb 10 11:27:12 kindnet-632332 dockerd[852]: time="2025-02-10T11:27:12.938011827Z" level=info msg="Starting up"
	Feb 10 11:28:12 kindnet-632332 dockerd[852]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 10 11:28:12 kindnet-632332 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 10 11:28:12 kindnet-632332 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 10 11:28:12 kindnet-632332 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0210 11:28:13.058060  474718 out.go:270] * 
	W0210 11:28:13.059236  474718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:28:13.061215  474718 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/kindnet/Start (89.70s)
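The failing journal above has a clear signature: dockerd restarts at 11:27:12, then fails exactly 60 seconds later because it cannot dial `/run/containerd/containerd.sock` before its dial deadline expires. A minimal triage sketch (file name and log excerpt are reproduced from the journal above; the script itself is illustrative, not part of the test suite) that flags this signature in a saved journal:

```shell
#!/bin/sh
# Save the relevant journal lines (excerpt copied from the failure above).
cat > /tmp/docker-journal.txt <<'EOF'
Feb 10 11:27:12 kindnet-632332 dockerd[852]: time="2025-02-10T11:27:12.938011827Z" level=info msg="Starting up"
Feb 10 11:28:12 kindnet-632332 dockerd[852]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
EOF

# Flag the containerd-socket dial timeout that caused exit status 90.
if grep -q 'failed to dial "/run/containerd/containerd.sock"' /tmp/docker-journal.txt; then
  echo "containerd socket dial timeout detected"
fi
```

On a live node the same check would run against `journalctl --no-pager -u docker` output; the timeout usually means the system-managed containerd never came up after the docker.service restart.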

                                                
                                    

Test pass (303/338)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.18
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 3.38
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
22 TestOffline 93.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 217.71
29 TestAddons/serial/Volcano 42.88
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.51
35 TestAddons/parallel/Registry 15.36
36 TestAddons/parallel/Ingress 21.64
37 TestAddons/parallel/InspektorGadget 11.96
38 TestAddons/parallel/MetricsServer 6.82
40 TestAddons/parallel/CSI 36.07
41 TestAddons/parallel/Headlamp 18.76
42 TestAddons/parallel/CloudSpanner 6.59
43 TestAddons/parallel/LocalPath 11.99
44 TestAddons/parallel/NvidiaDevicePlugin 6.62
45 TestAddons/parallel/Yakd 12.04
47 TestAddons/StoppedEnableDisable 13.59
48 TestCertOptions 58.56
49 TestCertExpiration 287.03
50 TestDockerFlags 99.44
51 TestForceSystemdFlag 65.77
52 TestForceSystemdEnv 103.36
54 TestKVMDriverInstallOrUpdate 5.87
58 TestErrorSpam/setup 46.79
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.75
61 TestErrorSpam/pause 1.23
62 TestErrorSpam/unpause 1.4
63 TestErrorSpam/stop 6.66
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 88.6
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.44
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.26
75 TestFunctional/serial/CacheCmd/cache/add_local 1.26
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.15
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 41.75
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.94
86 TestFunctional/serial/LogsFileCmd 0.98
87 TestFunctional/serial/InvalidService 4.74
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 14.84
91 TestFunctional/parallel/DryRun 0.33
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 8.62
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 43.28
101 TestFunctional/parallel/SSHCmd 0.44
102 TestFunctional/parallel/CpCmd 1.46
103 TestFunctional/parallel/MySQL 29.85
104 TestFunctional/parallel/FileSync 0.24
105 TestFunctional/parallel/CertSync 1.38
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
113 TestFunctional/parallel/License 0.23
114 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
125 TestFunctional/parallel/ProfileCmd/profile_list 0.4
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
127 TestFunctional/parallel/MountCmd/any-port 15.43
128 TestFunctional/parallel/ServiceCmd/List 0.46
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
131 TestFunctional/parallel/ServiceCmd/Format 0.54
132 TestFunctional/parallel/ServiceCmd/URL 0.29
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.6
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.64
140 TestFunctional/parallel/ImageCommands/Setup 1.63
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.27
142 TestFunctional/parallel/DockerEnv/bash 0.86
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
148 TestFunctional/parallel/MountCmd/specific-port 1.74
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.41
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
158 TestGvisorAddon 186.88
161 TestMultiControlPlane/serial/StartCluster 218.24
162 TestMultiControlPlane/serial/DeployApp 5.22
163 TestMultiControlPlane/serial/PingHostFromPods 1.23
164 TestMultiControlPlane/serial/AddWorkerNode 62.77
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
167 TestMultiControlPlane/serial/CopyFile 13.03
168 TestMultiControlPlane/serial/StopSecondaryNode 13.3
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
170 TestMultiControlPlane/serial/RestartSecondaryNode 38.84
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 224.28
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.26
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
175 TestMultiControlPlane/serial/StopCluster 37.6
176 TestMultiControlPlane/serial/RestartCluster 157.15
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
178 TestMultiControlPlane/serial/AddSecondaryNode 83.48
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
182 TestImageBuild/serial/Setup 52.04
183 TestImageBuild/serial/NormalBuild 1.36
184 TestImageBuild/serial/BuildWithBuildArg 0.88
185 TestImageBuild/serial/BuildWithDockerIgnore 0.75
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.72
190 TestJSONOutput/start/Command 89.51
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.58
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.55
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 12.59
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.2
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 98.12
222 TestMountStart/serial/StartWithMountFirst 32.42
223 TestMountStart/serial/VerifyMountFirst 0.39
224 TestMountStart/serial/StartWithMountSecond 28.3
225 TestMountStart/serial/VerifyMountSecond 0.38
226 TestMountStart/serial/DeleteFirst 0.9
227 TestMountStart/serial/VerifyMountPostDelete 0.39
228 TestMountStart/serial/Stop 2.28
229 TestMountStart/serial/RestartStopped 26.88
230 TestMountStart/serial/VerifyMountPostStop 0.37
233 TestMultiNode/serial/FreshStart2Nodes 144.39
234 TestMultiNode/serial/DeployApp2Nodes 4.25
235 TestMultiNode/serial/PingHostFrom2Pods 0.83
236 TestMultiNode/serial/AddNode 61.62
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.58
239 TestMultiNode/serial/CopyFile 7.33
240 TestMultiNode/serial/StopNode 3.43
241 TestMultiNode/serial/StartAfterStop 42.24
242 TestMultiNode/serial/RestartKeepsNodes 227.17
243 TestMultiNode/serial/DeleteNode 2.34
244 TestMultiNode/serial/StopMultiNode 25.04
245 TestMultiNode/serial/RestartMultiNode 100.52
246 TestMultiNode/serial/ValidateNameConflict 51.18
251 TestPreload 151.26
253 TestScheduledStopUnix 120.7
254 TestSkaffold 123.91
257 TestRunningBinaryUpgrade 226.1
259 TestKubernetesUpgrade 238.78
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 65.8
272 TestPause/serial/Start 90.73
273 TestStoppedBinaryUpgrade/Setup 0.37
274 TestStoppedBinaryUpgrade/Upgrade 146.9
275 TestNoKubernetes/serial/StartWithStopK8s 37.07
276 TestNoKubernetes/serial/Start 34.88
277 TestPause/serial/SecondStartNoReconfiguration 73.49
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
279 TestNoKubernetes/serial/ProfileList 19.05
280 TestNoKubernetes/serial/Stop 2.3
281 TestNoKubernetes/serial/StartNoArgs 28.65
282 TestPause/serial/Pause 0.55
283 TestPause/serial/VerifyStatus 0.25
284 TestPause/serial/Unpause 0.55
285 TestPause/serial/PauseAgain 0.7
286 TestPause/serial/DeletePaused 1.05
287 TestPause/serial/VerifyDeletedResources 14.27
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
302 TestStartStop/group/old-k8s-version/serial/FirstStart 172.61
304 TestStartStop/group/no-preload/serial/FirstStart 128.37
306 TestStartStop/group/embed-certs/serial/FirstStart 120.88
307 TestStartStop/group/no-preload/serial/DeployApp 9.38
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
309 TestStartStop/group/no-preload/serial/Stop 13.36
310 TestStartStop/group/embed-certs/serial/DeployApp 9.3
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/SecondStart 302.43
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
314 TestStartStop/group/embed-certs/serial/Stop 13.34
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
316 TestStartStop/group/embed-certs/serial/SecondStart 310.14
317 TestStartStop/group/old-k8s-version/serial/DeployApp 8.46
318 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1
319 TestStartStop/group/old-k8s-version/serial/Stop 13.48
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 109.98
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
323 TestStartStop/group/old-k8s-version/serial/SecondStart 564.8
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.3
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.93
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
331 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
332 TestStartStop/group/no-preload/serial/Pause 2.52
334 TestStartStop/group/newest-cni/serial/FirstStart 66.96
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
338 TestStartStop/group/embed-certs/serial/Pause 2.4
339 TestNetworkPlugins/group/auto/Start 64.58
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
342 TestStartStop/group/newest-cni/serial/Stop 8.33
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
344 TestStartStop/group/newest-cni/serial/SecondStart 39.25
345 TestNetworkPlugins/group/auto/KubeletFlags 0.21
346 TestNetworkPlugins/group/auto/NetCatPod 13.25
347 TestNetworkPlugins/group/auto/DNS 0.17
348 TestNetworkPlugins/group/auto/Localhost 0.16
349 TestNetworkPlugins/group/auto/HairPin 0.13
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
353 TestStartStop/group/newest-cni/serial/Pause 2.71
355 TestNetworkPlugins/group/calico/Start 124.06
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
357 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
358 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
360 TestNetworkPlugins/group/custom-flannel/Start 85.88
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
364 TestNetworkPlugins/group/calico/KubeletFlags 0.23
365 TestNetworkPlugins/group/calico/NetCatPod 11.24
366 TestNetworkPlugins/group/custom-flannel/DNS 0.15
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
369 TestNetworkPlugins/group/calico/DNS 0.15
370 TestNetworkPlugins/group/calico/Localhost 0.12
371 TestNetworkPlugins/group/calico/HairPin 0.15
372 TestNetworkPlugins/group/false/Start 64.92
373 TestNetworkPlugins/group/enable-default-cni/Start 88.02
374 TestNetworkPlugins/group/flannel/Start 114.68
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
378 TestStartStop/group/old-k8s-version/serial/Pause 2.47
379 TestNetworkPlugins/group/bridge/Start 114.02
380 TestNetworkPlugins/group/false/KubeletFlags 0.24
381 TestNetworkPlugins/group/false/NetCatPod 11.24
382 TestNetworkPlugins/group/false/DNS 0.18
383 TestNetworkPlugins/group/false/Localhost 0.14
384 TestNetworkPlugins/group/false/HairPin 0.13
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
387 TestNetworkPlugins/group/kubenet/Start 75.73
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
391 TestNetworkPlugins/group/flannel/ControllerPod 6.01
392 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
393 TestNetworkPlugins/group/flannel/NetCatPod 10.31
394 TestNetworkPlugins/group/flannel/DNS 0.18
395 TestNetworkPlugins/group/flannel/Localhost 0.15
396 TestNetworkPlugins/group/flannel/HairPin 0.15
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
398 TestNetworkPlugins/group/bridge/NetCatPod 12.27
399 TestNetworkPlugins/group/bridge/DNS 0.15
400 TestNetworkPlugins/group/bridge/Localhost 0.16
401 TestNetworkPlugins/group/bridge/HairPin 0.13
402 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
403 TestNetworkPlugins/group/kubenet/NetCatPod 11.22
404 TestNetworkPlugins/group/kubenet/DNS 0.14
405 TestNetworkPlugins/group/kubenet/Localhost 0.11
406 TestNetworkPlugins/group/kubenet/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (6.81s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-202700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-202700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (6.80780409s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.81s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0210 10:21:05.443444  428547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0210 10:21:05.443575  428547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-421267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-202700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-202700: exit status 85 (69.538303ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-202700 | jenkins | v1.35.0 | 10 Feb 25 10:20 UTC |          |
	|         | -p download-only-202700        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:20:58
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:20:58.679851  428559 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:20:58.680002  428559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:20:58.680012  428559 out.go:358] Setting ErrFile to fd 2...
	I0210 10:20:58.680016  428559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:20:58.680248  428559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	W0210 10:20:58.680383  428559 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20385-421267/.minikube/config/config.json: open /home/jenkins/minikube-integration/20385-421267/.minikube/config/config.json: no such file or directory
	I0210 10:20:58.680969  428559 out.go:352] Setting JSON to true
	I0210 10:20:58.682002  428559 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7409,"bootTime":1739175450,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:20:58.682122  428559 start.go:139] virtualization: kvm guest
	I0210 10:20:58.684768  428559 out.go:97] [download-only-202700] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0210 10:20:58.684892  428559 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20385-421267/.minikube/cache/preloaded-tarball: no such file or directory
	I0210 10:20:58.684972  428559 notify.go:220] Checking for updates...
	I0210 10:20:58.686635  428559 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:20:58.688568  428559 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:20:58.690128  428559 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	I0210 10:20:58.691619  428559 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	I0210 10:20:58.692957  428559 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 10:20:58.695285  428559 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 10:20:58.695586  428559 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:20:58.732462  428559 out.go:97] Using the kvm2 driver based on user configuration
	I0210 10:20:58.732492  428559 start.go:297] selected driver: kvm2
	I0210 10:20:58.732498  428559 start.go:901] validating driver "kvm2" against <nil>
	I0210 10:20:58.732813  428559 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:20:58.732909  428559 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-421267/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 10:20:58.748219  428559 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 10:20:58.748268  428559 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:20:58.748777  428559 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0210 10:20:58.748977  428559 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 10:20:58.749009  428559 cni.go:84] Creating CNI manager for ""
	I0210 10:20:58.749061  428559 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0210 10:20:58.749146  428559 start.go:340] cluster config:
	{Name:download-only-202700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-202700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:20:58.749334  428559 iso.go:125] acquiring lock: {Name:mkf9a3fabe49fac7b346f5a0bab423b6773c58da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:20:58.751095  428559 out.go:97] Downloading VM boot image ...
	I0210 10:20:58.751126  428559 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20385-421267/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 10:21:01.147753  428559 out.go:97] Starting "download-only-202700" primary control-plane node in "download-only-202700" cluster
	I0210 10:21:01.147777  428559 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0210 10:21:01.170625  428559 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0210 10:21:01.170669  428559 cache.go:56] Caching tarball of preloaded images
	I0210 10:21:01.170842  428559 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0210 10:21:01.172707  428559 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0210 10:21:01.172740  428559 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0210 10:21:01.197726  428559 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/20385-421267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-202700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-202700"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.18s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-202700
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.1/json-events (3.38s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-136362 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-136362 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=kvm2 : (3.380077605s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.38s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0210 10:21:09.201574  428547 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0210 10:21:09.201612  428547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-421267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-136362
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-136362: exit status 85 (63.137756ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-202700 | jenkins | v1.35.0 | 10 Feb 25 10:20 UTC |                     |
	|         | -p download-only-202700        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 Feb 25 10:21 UTC | 10 Feb 25 10:21 UTC |
	| delete  | -p download-only-202700        | download-only-202700 | jenkins | v1.35.0 | 10 Feb 25 10:21 UTC | 10 Feb 25 10:21 UTC |
	| start   | -o=json --download-only        | download-only-136362 | jenkins | v1.35.0 | 10 Feb 25 10:21 UTC |                     |
	|         | -p download-only-136362        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:21:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:21:05.864954  428754 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:21:05.865083  428754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:21:05.865095  428754 out.go:358] Setting ErrFile to fd 2...
	I0210 10:21:05.865101  428754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:21:05.865327  428754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 10:21:05.865891  428754 out.go:352] Setting JSON to true
	I0210 10:21:05.866764  428754 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7416,"bootTime":1739175450,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:21:05.866867  428754 start.go:139] virtualization: kvm guest
	I0210 10:21:05.869085  428754 out.go:97] [download-only-136362] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 10:21:05.869254  428754 notify.go:220] Checking for updates...
	I0210 10:21:05.870480  428754 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:21:05.871836  428754 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:21:05.873284  428754 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	I0210 10:21:05.874619  428754 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	I0210 10:21:05.875925  428754 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-136362 host does not exist
	  To start a cluster, run: "minikube start -p download-only-136362"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-136362
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I0210 10:21:09.803411  428547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-332023 --alsologtostderr --binary-mirror http://127.0.0.1:34305 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-332023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-332023
--- PASS: TestBinaryMirror (0.62s)

TestOffline (93.53s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-786963 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-786963 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m32.141069313s)
helpers_test.go:175: Cleaning up "offline-docker-786963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-786963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-786963: (1.390550399s)
--- PASS: TestOffline (93.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-830295
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-830295: exit status 85 (60.287383ms)
-- stdout --
	* Profile "addons-830295" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-830295"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-830295
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-830295: exit status 85 (58.99015ms)
-- stdout --
	* Profile "addons-830295" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-830295"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (217.71s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-830295 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-830295 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m37.705759452s)
--- PASS: TestAddons/Setup (217.71s)

TestAddons/serial/Volcano (42.88s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 30.279829ms
addons_test.go:815: volcano-admission stabilized in 30.321188ms
addons_test.go:807: volcano-scheduler stabilized in 30.375712ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-fz4nt" [ba8bd3fc-f6c7-415f-8dab-494cd4dd23d6] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003688484s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-tzklq" [a41de8ef-5831-44ae-a3c0-3e53a0c24745] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003941219s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-4qr9c" [2a52b8dd-4622-47ec-ad27-a586b1c4398e] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004148244s
addons_test.go:842: (dbg) Run:  kubectl --context addons-830295 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-830295 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-830295 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [6c2c3755-bfac-4b98-99be-adb9d0a9d54a] Pending
helpers_test.go:344: "test-job-nginx-0" [6c2c3755-bfac-4b98-99be-adb9d0a9d54a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [6c2c3755-bfac-4b98-99be-adb9d0a9d54a] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.003394107s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-830295 addons disable volcano --alsologtostderr -v=1: (11.427027964s)
--- PASS: TestAddons/serial/Volcano (42.88s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-830295 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-830295 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-830295 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-830295 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [01e282ec-29b2-4f48-b007-be99fab39d2e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [01e282ec-29b2-4f48-b007-be99fab39d2e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004244564s
addons_test.go:633: (dbg) Run:  kubectl --context addons-830295 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-830295 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-830295 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

TestAddons/parallel/Registry (15.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.975166ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-jsf5n" [704a078e-e483-446e-83e4-de365c91df52] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003457363s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2s85n" [2240d8f1-aeb5-4349-9e67-71700577ccee] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003934907s
addons_test.go:331: (dbg) Run:  kubectl --context addons-830295 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-830295 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-830295 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.686967888s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 ip
2025/02/10 10:26:03 [DEBUG] GET http://192.168.39.90:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.36s)

TestAddons/parallel/Ingress (21.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-830295 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-830295 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-830295 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [35a5c240-0d14-4653-84b1-5dc0fe51fb93] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [35a5c240-0d14-4653-84b1-5dc0fe51fb93] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003908905s
I0210 10:26:27.465323  428547 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-830295 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.90
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-830295 addons disable ingress-dns --alsologtostderr -v=1: (1.64301357s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-830295 addons disable ingress --alsologtostderr -v=1: (7.669355118s)
--- PASS: TestAddons/parallel/Ingress (21.64s)

TestAddons/parallel/InspektorGadget (11.96s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hhxhr" [1b7abe7c-2f4e-4793-be23-887fa85d793f] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003461546s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-830295 addons disable inspektor-gadget --alsologtostderr -v=1: (5.955486947s)
--- PASS: TestAddons/parallel/InspektorGadget (11.96s)

TestAddons/parallel/MetricsServer (6.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.711634ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-5bm2r" [da3250d5-adfd-46b9-b7ff-c13708c72dbc] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003284184s
addons_test.go:402: (dbg) Run:  kubectl --context addons-830295 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

TestAddons/parallel/CSI (36.07s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0210 10:25:48.543195  428547 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0210 10:25:48.547089  428547 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 10:25:48.547112  428547 kapi.go:107] duration metric: took 3.93549ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.95363ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-830295 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-830295 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [edf458ea-bc17-40ad-8b42-a1cb03a2988c] Pending
helpers_test.go:344: "task-pv-pod" [edf458ea-bc17-40ad-8b42-a1cb03a2988c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [edf458ea-bc17-40ad-8b42-a1cb03a2988c] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004372815s
addons_test.go:511: (dbg) Run:  kubectl --context addons-830295 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-830295 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-830295 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-830295 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-830295 delete pod task-pv-pod: (1.205663942s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-830295 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-830295 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-830295 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [698fe136-fd08-476c-a5a2-9045ebd5d1a0] Pending
helpers_test.go:344: "task-pv-pod-restore" [698fe136-fd08-476c-a5a2-9045ebd5d1a0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [698fe136-fd08-476c-a5a2-9045ebd5d1a0] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003972136s
addons_test.go:553: (dbg) Run:  kubectl --context addons-830295 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-830295 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-830295 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-830295 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.657077594s)
--- PASS: TestAddons/parallel/CSI (36.07s)

TestAddons/parallel/Headlamp (18.76s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-830295 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-4fhbf" [3036d621-b159-46e2-a5f2-b61a4869d3ca] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-4fhbf" [3036d621-b159-46e2-a5f2-b61a4869d3ca] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-4fhbf" [3036d621-b159-46e2-a5f2-b61a4869d3ca] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003257569s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-830295 addons disable headlamp --alsologtostderr -v=1: (5.859026817s)
--- PASS: TestAddons/parallel/Headlamp (18.76s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-jzjt7" [0173f65d-6300-4646-a9ea-2bac6ada69bc] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004892274s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (11.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-830295 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-830295 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [258973c1-dedd-4556-8fe7-41ca4d2d1438] Pending
helpers_test.go:344: "test-local-path" [258973c1-dedd-4556-8fe7-41ca4d2d1438] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [258973c1-dedd-4556-8fe7-41ca4d2d1438] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [258973c1-dedd-4556-8fe7-41ca4d2d1438] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003670714s
addons_test.go:906: (dbg) Run:  kubectl --context addons-830295 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 ssh "cat /opt/local-path-provisioner/pvc-e5825276-6901-4006-bf84-b7bfe21805a4_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-830295 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-830295 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.99s)

TestAddons/parallel/NvidiaDevicePlugin (6.62s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-v2hzl" [57e6c3a6-2eac-45c5-8870-c2683e7b8c36] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004349137s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

TestAddons/parallel/Yakd (12.04s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-hfm72" [1d101b66-0e0d-4bb8-aa6f-7be28fe72c70] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009117056s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-830295 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-830295 addons disable yakd --alsologtostderr -v=1: (6.032155556s)
--- PASS: TestAddons/parallel/Yakd (12.04s)

TestAddons/StoppedEnableDisable (13.59s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-830295
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-830295: (13.296719658s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-830295
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-830295
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-830295
--- PASS: TestAddons/StoppedEnableDisable (13.59s)

TestCertOptions (58.56s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-477842 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-477842 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (57.051154255s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-477842 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-477842 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-477842 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-477842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-477842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-477842: (1.04177863s)
--- PASS: TestCertOptions (58.56s)

TestCertExpiration (287.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-120520 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-120520 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m17.501881476s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-120520 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-120520 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (28.504931811s)
helpers_test.go:175: Cleaning up "cert-expiration-120520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-120520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-120520: (1.019853356s)
--- PASS: TestCertExpiration (287.03s)

TestDockerFlags (99.44s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-584050 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-584050 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m36.746439948s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-584050 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-584050 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-584050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-584050
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-584050: (2.088910799s)
--- PASS: TestDockerFlags (99.44s)

TestForceSystemdFlag (65.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-804207 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-804207 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m4.685336595s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-804207 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-804207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-804207
--- PASS: TestForceSystemdFlag (65.77s)

TestForceSystemdEnv (103.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-098484 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E0210 11:10:52.898193  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-098484 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m42.073649282s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-098484 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-098484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-098484
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-098484: (1.007783363s)
--- PASS: TestForceSystemdEnv (103.36s)

TestKVMDriverInstallOrUpdate (5.87s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.87s)

TestErrorSpam/setup (46.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-011756 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-011756 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-011756 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-011756 --driver=kvm2 : (46.790418807s)
--- PASS: TestErrorSpam/setup (46.79s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 pause
--- PASS: TestErrorSpam/pause (1.23s)

TestErrorSpam/unpause (1.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (6.66s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 stop: (3.49604074s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 stop: (1.396515246s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-011756 --log_dir /tmp/nospam-011756 stop: (1.768623364s)
--- PASS: TestErrorSpam/stop (6.66s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20385-421267/.minikube/files/etc/test/nested/copy/428547/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607439 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-607439 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m28.603317271s)
--- PASS: TestFunctional/serial/StartWithProxy (88.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.44s)

=== RUN   TestFunctional/serial/SoftStart
I0210 10:29:18.301072  428547 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607439 --alsologtostderr -v=8
E0210 10:29:48.196262  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:48.202659  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:48.213998  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:48.235368  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:48.276835  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:48.358320  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:48.519885  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:48.841968  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:49.483399  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:50.764782  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:53.326787  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:29:58.448637  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-607439 --alsologtostderr -v=8: (40.443533745s)
functional_test.go:680: soft start took 40.444536149s for "functional-607439" cluster.
I0210 10:29:58.745077  428547 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (40.44s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-607439 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.26s)

TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-607439 /tmp/TestFunctionalserialCacheCmdcacheadd_local795430470/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cache add minikube-local-cache-test:functional-607439
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cache delete minikube-local-cache-test:functional-607439
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-607439
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (225.661359ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 kubectl -- --context functional-607439 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-607439 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.75s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607439 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0210 10:30:08.690059  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:30:29.172137  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-607439 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.747073529s)
functional_test.go:778: restart took 41.747243072s for "functional-607439" cluster.
I0210 10:30:45.951762  428547 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (41.75s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-607439 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 logs
--- PASS: TestFunctional/serial/LogsCmd (0.94s)

TestFunctional/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 logs --file /tmp/TestFunctionalserialLogsFileCmd53815347/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.98s)

TestFunctional/serial/InvalidService (4.74s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-607439 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-607439
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-607439: exit status 115 (294.44838ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.57:30976 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-607439 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-607439 delete -f testdata/invalidsvc.yaml: (1.248496542s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 config get cpus: exit status 14 (80.445003ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 config get cpus: exit status 14 (53.00954ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (14.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-607439 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-607439 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 435126: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.84s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607439 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-607439 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (164.433776ms)

-- stdout --
	* [functional-607439] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0210 10:30:53.300442  434651 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:30:53.300579  434651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:30:53.300590  434651 out.go:358] Setting ErrFile to fd 2...
	I0210 10:30:53.300596  434651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:30:53.300769  434651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 10:30:53.301346  434651 out.go:352] Setting JSON to false
	I0210 10:30:53.302271  434651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8003,"bootTime":1739175450,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:30:53.302372  434651 start.go:139] virtualization: kvm guest
	I0210 10:30:53.304717  434651 out.go:177] * [functional-607439] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 10:30:53.306252  434651 notify.go:220] Checking for updates...
	I0210 10:30:53.306292  434651 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:30:53.307762  434651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:30:53.309358  434651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	I0210 10:30:53.311060  434651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	I0210 10:30:53.312470  434651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 10:30:53.313819  434651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:30:53.315709  434651 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:30:53.316310  434651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:30:53.316387  434651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:30:53.333149  434651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0210 10:30:53.333581  434651 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:30:53.334155  434651 main.go:141] libmachine: Using API Version  1
	I0210 10:30:53.334186  434651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:30:53.334649  434651 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:30:53.334897  434651 main.go:141] libmachine: (functional-607439) Calling .DriverName
	I0210 10:30:53.335257  434651 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:30:53.335724  434651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:30:53.335777  434651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:30:53.351374  434651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I0210 10:30:53.351736  434651 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:30:53.352241  434651 main.go:141] libmachine: Using API Version  1
	I0210 10:30:53.352272  434651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:30:53.352614  434651 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:30:53.352812  434651 main.go:141] libmachine: (functional-607439) Calling .DriverName
	I0210 10:30:53.389965  434651 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 10:30:53.391136  434651 start.go:297] selected driver: kvm2
	I0210 10:30:53.391151  434651 start.go:901] validating driver "kvm2" against &{Name:functional-607439 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-607439 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:30:53.391262  434651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:30:53.393635  434651 out.go:201] 
	W0210 10:30:53.394951  434651 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 10:30:53.396130  434651 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607439 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-607439 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-607439 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (170.59422ms)

-- stdout --
	* [functional-607439] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0210 10:30:53.124067  434593 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:30:53.124222  434593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:30:53.124236  434593 out.go:358] Setting ErrFile to fd 2...
	I0210 10:30:53.124243  434593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:30:53.124707  434593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 10:30:53.125440  434593 out.go:352] Setting JSON to false
	I0210 10:30:53.126823  434593 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8003,"bootTime":1739175450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:30:53.126958  434593 start.go:139] virtualization: kvm guest
	I0210 10:30:53.128975  434593 out.go:177] * [functional-607439] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0210 10:30:53.130457  434593 notify.go:220] Checking for updates...
	I0210 10:30:53.131650  434593 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:30:53.132965  434593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:30:53.134276  434593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	I0210 10:30:53.135528  434593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	I0210 10:30:53.137122  434593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 10:30:53.138616  434593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:30:53.140552  434593 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:30:53.140955  434593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:30:53.141002  434593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:30:53.162349  434593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
	I0210 10:30:53.162885  434593 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:30:53.163541  434593 main.go:141] libmachine: Using API Version  1
	I0210 10:30:53.163562  434593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:30:53.163871  434593 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:30:53.164167  434593 main.go:141] libmachine: (functional-607439) Calling .DriverName
	I0210 10:30:53.164442  434593 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:30:53.164833  434593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:30:53.164894  434593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:30:53.183673  434593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0210 10:30:53.184159  434593 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:30:53.184826  434593 main.go:141] libmachine: Using API Version  1
	I0210 10:30:53.184857  434593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:30:53.185411  434593 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:30:53.185665  434593 main.go:141] libmachine: (functional-607439) Calling .DriverName
	I0210 10:30:53.227306  434593 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0210 10:30:53.228524  434593 start.go:297] selected driver: kvm2
	I0210 10:30:53.228544  434593 start.go:901] validating driver "kvm2" against &{Name:functional-607439 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-607439 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:30:53.228684  434593 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:30:53.231132  434593 out.go:201] 
	W0210 10:30:53.232306  434593 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0210 10:30:53.233454  434593 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (8.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-607439 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-607439 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-xxs56" [414165e2-59d6-45ce-9671-effdad399e83] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-xxs56" [414165e2-59d6-45ce-9671-effdad399e83] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005087224s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.57:31668
functional_test.go:1692: http://192.168.39.57:31668: success! body:

Hostname: hello-node-connect-58f9cf68d8-xxs56

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.57:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.57:31668
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.62s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (43.28s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [64f57075-0f78-439a-b957-0f55789c2a1f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00349393s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-607439 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-607439 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-607439 get pvc myclaim -o=json
I0210 10:30:59.990531  428547 retry.go:31] will retry after 2.338369747s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2e3b82bc-7d3b-4db2-9d8c-cb30183fdd3c ResourceVersion:796 Generation:0 CreationTimestamp:2025-02-10 10:30:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0007ed310 VolumeMode:0xc0007ed330 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-607439 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-607439 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0fb48169-0675-4dbb-bc37-5747c08fc8ec] Pending
helpers_test.go:344: "sp-pod" [0fb48169-0675-4dbb-bc37-5747c08fc8ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0fb48169-0675-4dbb-bc37-5747c08fc8ec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003598958s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-607439 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-607439 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-607439 delete -f testdata/storage-provisioner/pod.yaml: (2.082020308s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-607439 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1258bd28-aaf8-42c3-8d77-d45f75bba113] Pending
helpers_test.go:344: "sp-pod" [1258bd28-aaf8-42c3-8d77-d45f75bba113] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1258bd28-aaf8-42c3-8d77-d45f75bba113] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003263618s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-607439 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.28s)

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh -n functional-607439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cp functional-607439:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd266223381/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh -n functional-607439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh -n functional-607439 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)

TestFunctional/parallel/MySQL (29.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-607439 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-vppdf" [237ab0c7-1be1-4686-a283-a2ba248ea6cd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-vppdf" [237ab0c7-1be1-4686-a283-a2ba248ea6cd] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.003549246s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-607439 exec mysql-58ccfd96bb-vppdf -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-607439 exec mysql-58ccfd96bb-vppdf -- mysql -ppassword -e "show databases;": exit status 1 (154.654372ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0210 10:31:37.298715  428547 retry.go:31] will retry after 1.132894845s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-607439 exec mysql-58ccfd96bb-vppdf -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-607439 exec mysql-58ccfd96bb-vppdf -- mysql -ppassword -e "show databases;": exit status 1 (121.610487ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0210 10:31:38.553882  428547 retry.go:31] will retry after 2.123769884s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-607439 exec mysql-58ccfd96bb-vppdf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.85s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/428547/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo cat /etc/test/nested/copy/428547/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/428547.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo cat /etc/ssl/certs/428547.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/428547.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo cat /usr/share/ca-certificates/428547.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/4285472.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo cat /etc/ssl/certs/4285472.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/4285472.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo cat /usr/share/ca-certificates/4285472.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-607439 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 ssh "sudo systemctl is-active crio": exit status 1 (212.66392ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-607439 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-607439 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-kcgbb" [18ca236b-4a3b-44fd-a220-db51517299d3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-kcgbb" [18ca236b-4a3b-44fd-a220-db51517299d3] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003882127s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "347.727978ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "51.568942ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "334.636334ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "62.852385ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (15.43s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdany-port2306678774/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739183455947469277" to /tmp/TestFunctionalparallelMountCmdany-port2306678774/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739183455947469277" to /tmp/TestFunctionalparallelMountCmdany-port2306678774/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739183455947469277" to /tmp/TestFunctionalparallelMountCmdany-port2306678774/001/test-1739183455947469277
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.651942ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0210 10:30:56.164472  428547 retry.go:31] will retry after 384.134195ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 10 10:30 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 10 10:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 10 10:30 test-1739183455947469277
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh cat /mount-9p/test-1739183455947469277
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-607439 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e5f1f907-2975-4326-974c-b6ae678ca32e] Pending
helpers_test.go:344: "busybox-mount" [e5f1f907-2975-4326-974c-b6ae678ca32e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e5f1f907-2975-4326-974c-b6ae678ca32e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e5f1f907-2975-4326-974c-b6ae678ca32e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.003424994s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-607439 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdany-port2306678774/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.43s)

TestFunctional/parallel/ServiceCmd/List (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 service list -o json
functional_test.go:1511: Took "444.88418ms" to run "out/minikube-linux-amd64 -p functional-607439 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.57:32226
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.57:32226
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.6s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607439 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-607439
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-607439
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607439 image ls --format short --alsologtostderr:
I0210 10:31:16.023302  436899 out.go:345] Setting OutFile to fd 1 ...
I0210 10:31:16.023446  436899 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:16.023458  436899 out.go:358] Setting ErrFile to fd 2...
I0210 10:31:16.023465  436899 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:16.023656  436899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
I0210 10:31:16.024292  436899 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:16.024397  436899 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:16.024806  436899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:16.024880  436899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:16.040405  436899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
I0210 10:31:16.040874  436899 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:16.041552  436899 main.go:141] libmachine: Using API Version  1
I0210 10:31:16.041576  436899 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:16.041948  436899 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:16.042186  436899 main.go:141] libmachine: (functional-607439) Calling .GetState
I0210 10:31:16.043948  436899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:16.043999  436899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:16.059094  436899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
I0210 10:31:16.059502  436899 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:16.060030  436899 main.go:141] libmachine: Using API Version  1
I0210 10:31:16.060057  436899 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:16.060392  436899 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:16.060598  436899 main.go:141] libmachine: (functional-607439) Calling .DriverName
I0210 10:31:16.060827  436899 ssh_runner.go:195] Run: systemctl --version
I0210 10:31:16.060866  436899 main.go:141] libmachine: (functional-607439) Calling .GetSSHHostname
I0210 10:31:16.063462  436899 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:16.063809  436899 main.go:141] libmachine: (functional-607439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ff:fb", ip: ""} in network mk-functional-607439: {Iface:virbr1 ExpiryTime:2025-02-10 11:28:04 +0000 UTC Type:0 Mac:52:54:00:35:ff:fb Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-607439 Clientid:01:52:54:00:35:ff:fb}
I0210 10:31:16.063841  436899 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined IP address 192.168.39.57 and MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:16.063996  436899 main.go:141] libmachine: (functional-607439) Calling .GetSSHPort
I0210 10:31:16.064163  436899 main.go:141] libmachine: (functional-607439) Calling .GetSSHKeyPath
I0210 10:31:16.064327  436899 main.go:141] libmachine: (functional-607439) Calling .GetSSHUsername
I0210 10:31:16.064458  436899 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/functional-607439/id_rsa Username:docker}
I0210 10:31:16.138952  436899 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0210 10:31:16.161950  436899 main.go:141] libmachine: Making call to close driver server
I0210 10:31:16.161970  436899 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:16.162285  436899 main.go:141] libmachine: (functional-607439) DBG | Closing plugin on server side
I0210 10:31:16.162290  436899 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:16.162340  436899 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:31:16.162352  436899 main.go:141] libmachine: Making call to close driver server
I0210 10:31:16.162359  436899 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:16.162600  436899 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:16.162618  436899 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:31:16.162634  436899 main.go:141] libmachine: (functional-607439) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607439 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.32.1           | 2b0d6572d062c | 69.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-607439 | 8b50488a3c990 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kicbase/echo-server               | functional-607439 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | latest            | 97662d24417b3 | 192MB  |
| registry.k8s.io/kube-apiserver              | v1.32.1           | 95c0bda56fc4d | 97MB   |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| localhost/my-image                          | functional-607439 | c6d63f22b2e89 | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1           | 019ee182b58e2 | 89.7MB |
| registry.k8s.io/kube-proxy                  | v1.32.1           | e29f9c7391fd9 | 94MB   |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607439 image ls --format table --alsologtostderr:
I0210 10:31:20.303486  437064 out.go:345] Setting OutFile to fd 1 ...
I0210 10:31:20.303775  437064 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:20.303785  437064 out.go:358] Setting ErrFile to fd 2...
I0210 10:31:20.303790  437064 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:20.304013  437064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
I0210 10:31:20.304641  437064 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:20.304747  437064 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:20.305172  437064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:20.305248  437064 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:20.321600  437064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
I0210 10:31:20.322138  437064 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:20.322722  437064 main.go:141] libmachine: Using API Version  1
I0210 10:31:20.322744  437064 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:20.323119  437064 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:20.323349  437064 main.go:141] libmachine: (functional-607439) Calling .GetState
I0210 10:31:20.325349  437064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:20.325395  437064 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:20.340835  437064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
I0210 10:31:20.341377  437064 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:20.341866  437064 main.go:141] libmachine: Using API Version  1
I0210 10:31:20.341889  437064 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:20.342323  437064 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:20.342548  437064 main.go:141] libmachine: (functional-607439) Calling .DriverName
I0210 10:31:20.342745  437064 ssh_runner.go:195] Run: systemctl --version
I0210 10:31:20.342769  437064 main.go:141] libmachine: (functional-607439) Calling .GetSSHHostname
I0210 10:31:20.345918  437064 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:20.346362  437064 main.go:141] libmachine: (functional-607439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ff:fb", ip: ""} in network mk-functional-607439: {Iface:virbr1 ExpiryTime:2025-02-10 11:28:04 +0000 UTC Type:0 Mac:52:54:00:35:ff:fb Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-607439 Clientid:01:52:54:00:35:ff:fb}
I0210 10:31:20.346385  437064 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined IP address 192.168.39.57 and MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:20.346595  437064 main.go:141] libmachine: (functional-607439) Calling .GetSSHPort
I0210 10:31:20.346769  437064 main.go:141] libmachine: (functional-607439) Calling .GetSSHKeyPath
I0210 10:31:20.346940  437064 main.go:141] libmachine: (functional-607439) Calling .GetSSHUsername
I0210 10:31:20.347076  437064 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/functional-607439/id_rsa Username:docker}
I0210 10:31:20.430033  437064 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0210 10:31:20.459771  437064 main.go:141] libmachine: Making call to close driver server
I0210 10:31:20.459787  437064 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:20.460089  437064 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:20.460110  437064 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:31:20.460102  437064 main.go:141] libmachine: (functional-607439) DBG | Closing plugin on server side
I0210 10:31:20.460120  437064 main.go:141] libmachine: Making call to close driver server
I0210 10:31:20.460128  437064 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:20.460367  437064 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:20.460396  437064 main.go:141] libmachine: (functional-607439) DBG | Closing plugin on server side
I0210 10:31:20.460410  437064 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607439 image ls --format json --alsologtostderr:
[{"id":"c6d63f22b2e89a68b32fddf803316b02360d4852041af58503759a69782d034a","repoDigests":[],"repoTags":["localhost/my-image:functional-607439"],"size":"1240000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"97000000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-607439"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"8b50488a3c9902214e6f57e5afcccb9c38b3a1ebb7203ba99749e5c3adf63c51","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-607439"],"size":"30"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"69600000"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"89700000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"94000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607439 image ls --format json --alsologtostderr:
I0210 10:31:20.089567  437040 out.go:345] Setting OutFile to fd 1 ...
I0210 10:31:20.089855  437040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:20.089867  437040 out.go:358] Setting ErrFile to fd 2...
I0210 10:31:20.089871  437040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:20.090096  437040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
I0210 10:31:20.090745  437040 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:20.090843  437040 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:20.091195  437040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:20.091259  437040 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:20.106830  437040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36081
I0210 10:31:20.107418  437040 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:20.108090  437040 main.go:141] libmachine: Using API Version  1
I0210 10:31:20.108113  437040 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:20.108479  437040 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:20.108694  437040 main.go:141] libmachine: (functional-607439) Calling .GetState
I0210 10:31:20.110607  437040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:20.110656  437040 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:20.126345  437040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39943
I0210 10:31:20.126864  437040 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:20.127527  437040 main.go:141] libmachine: Using API Version  1
I0210 10:31:20.127565  437040 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:20.127901  437040 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:20.128120  437040 main.go:141] libmachine: (functional-607439) Calling .DriverName
I0210 10:31:20.128330  437040 ssh_runner.go:195] Run: systemctl --version
I0210 10:31:20.128353  437040 main.go:141] libmachine: (functional-607439) Calling .GetSSHHostname
I0210 10:31:20.131277  437040 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:20.131744  437040 main.go:141] libmachine: (functional-607439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ff:fb", ip: ""} in network mk-functional-607439: {Iface:virbr1 ExpiryTime:2025-02-10 11:28:04 +0000 UTC Type:0 Mac:52:54:00:35:ff:fb Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-607439 Clientid:01:52:54:00:35:ff:fb}
I0210 10:31:20.131766  437040 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined IP address 192.168.39.57 and MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:20.131913  437040 main.go:141] libmachine: (functional-607439) Calling .GetSSHPort
I0210 10:31:20.132093  437040 main.go:141] libmachine: (functional-607439) Calling .GetSSHKeyPath
I0210 10:31:20.132250  437040 main.go:141] libmachine: (functional-607439) Calling .GetSSHUsername
I0210 10:31:20.132381  437040 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/functional-607439/id_rsa Username:docker}
I0210 10:31:20.211081  437040 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0210 10:31:20.241457  437040 main.go:141] libmachine: Making call to close driver server
I0210 10:31:20.241470  437040 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:20.241768  437040 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:20.241783  437040 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:31:20.241791  437040 main.go:141] libmachine: Making call to close driver server
I0210 10:31:20.241799  437040 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:20.242068  437040 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:20.242085  437040 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-607439 image ls --format yaml --alsologtostderr:
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "89700000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "69600000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "97000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 8b50488a3c9902214e6f57e5afcccb9c38b3a1ebb7203ba99749e5c3adf63c51
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-607439
size: "30"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "94000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-607439
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607439 image ls --format yaml --alsologtostderr:
I0210 10:31:16.215858  436923 out.go:345] Setting OutFile to fd 1 ...
I0210 10:31:16.215971  436923 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:16.215981  436923 out.go:358] Setting ErrFile to fd 2...
I0210 10:31:16.215986  436923 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:16.216219  436923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
I0210 10:31:16.216876  436923 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:16.216990  436923 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:16.217440  436923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:16.217499  436923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:16.233125  436923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
I0210 10:31:16.233602  436923 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:16.234189  436923 main.go:141] libmachine: Using API Version  1
I0210 10:31:16.234215  436923 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:16.234558  436923 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:16.234778  436923 main.go:141] libmachine: (functional-607439) Calling .GetState
I0210 10:31:16.236398  436923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:16.236441  436923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:16.251529  436923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
I0210 10:31:16.251975  436923 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:16.252453  436923 main.go:141] libmachine: Using API Version  1
I0210 10:31:16.252473  436923 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:16.252846  436923 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:16.253044  436923 main.go:141] libmachine: (functional-607439) Calling .DriverName
I0210 10:31:16.253321  436923 ssh_runner.go:195] Run: systemctl --version
I0210 10:31:16.253355  436923 main.go:141] libmachine: (functional-607439) Calling .GetSSHHostname
I0210 10:31:16.256294  436923 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:16.256682  436923 main.go:141] libmachine: (functional-607439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ff:fb", ip: ""} in network mk-functional-607439: {Iface:virbr1 ExpiryTime:2025-02-10 11:28:04 +0000 UTC Type:0 Mac:52:54:00:35:ff:fb Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-607439 Clientid:01:52:54:00:35:ff:fb}
I0210 10:31:16.256717  436923 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined IP address 192.168.39.57 and MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:16.256867  436923 main.go:141] libmachine: (functional-607439) Calling .GetSSHPort
I0210 10:31:16.257043  436923 main.go:141] libmachine: (functional-607439) Calling .GetSSHKeyPath
I0210 10:31:16.257216  436923 main.go:141] libmachine: (functional-607439) Calling .GetSSHUsername
I0210 10:31:16.257371  436923 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/functional-607439/id_rsa Username:docker}
I0210 10:31:16.338025  436923 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0210 10:31:16.383016  436923 main.go:141] libmachine: Making call to close driver server
I0210 10:31:16.383031  436923 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:16.383310  436923 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:16.383337  436923 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:31:16.383346  436923 main.go:141] libmachine: Making call to close driver server
I0210 10:31:16.383354  436923 main.go:141] libmachine: (functional-607439) DBG | Closing plugin on server side
I0210 10:31:16.383359  436923 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:16.383629  436923 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:16.383650  436923 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:31:16.383664  436923 main.go:141] libmachine: (functional-607439) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
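The YAML listing above is minikube's rendering of the `docker images --no-trunc --format "{{json .}}"` command the log shows it running over SSH: one JSON object per line, reshaped into `id`/`repoTags`/`size` records. A minimal Python sketch of that transformation; the sample lines and shortened IDs below are illustrative, not taken from this run:

```python
import json

# Illustrative stand-in for `docker images --no-trunc --format "{{json .}}"`
# output: one JSON object per line (values shortened/made up for the example).
raw = "\n".join([
    '{"Repository":"registry.k8s.io/pause","Tag":"3.10","ID":"sha256:873ed75102","Size":"736kB"}',
    '{"Repository":"registry.k8s.io/kube-proxy","Tag":"v1.32.1","ID":"sha256:e29f9c7391","Size":"94MB"}',
])

images = []
for line in raw.splitlines():
    img = json.loads(line)
    images.append({
        "id": img["ID"].removeprefix("sha256:"),  # bare digest, as in the YAML above
        "repoTags": [f'{img["Repository"]}:{img["Tag"]}'],
        "size": img["Size"],
    })

for entry in images:
    print(entry["id"], entry["repoTags"], entry["size"])
```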

TestFunctional/parallel/ImageCommands/ImageBuild (3.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 ssh pgrep buildkitd: exit status 1 (203.203049ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image build -t localhost/my-image:functional-607439 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-607439 image build -t localhost/my-image:functional-607439 testdata/build --alsologtostderr: (3.224260313s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-607439 image build -t localhost/my-image:functional-607439 testdata/build --alsologtostderr:
I0210 10:31:16.649028  436977 out.go:345] Setting OutFile to fd 1 ...
I0210 10:31:16.650152  436977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:16.650171  436977 out.go:358] Setting ErrFile to fd 2...
I0210 10:31:16.650178  436977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:31:16.650486  436977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
I0210 10:31:16.651355  436977 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:16.651930  436977 config.go:182] Loaded profile config "functional-607439": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:31:16.652401  436977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:16.652453  436977 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:16.668660  436977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35837
I0210 10:31:16.669209  436977 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:16.669878  436977 main.go:141] libmachine: Using API Version  1
I0210 10:31:16.669911  436977 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:16.670320  436977 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:16.670553  436977 main.go:141] libmachine: (functional-607439) Calling .GetState
I0210 10:31:16.672886  436977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0210 10:31:16.672936  436977 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:31:16.689750  436977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
I0210 10:31:16.690284  436977 main.go:141] libmachine: () Calling .GetVersion
I0210 10:31:16.690886  436977 main.go:141] libmachine: Using API Version  1
I0210 10:31:16.690913  436977 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:31:16.691243  436977 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:31:16.691479  436977 main.go:141] libmachine: (functional-607439) Calling .DriverName
I0210 10:31:16.691686  436977 ssh_runner.go:195] Run: systemctl --version
I0210 10:31:16.691715  436977 main.go:141] libmachine: (functional-607439) Calling .GetSSHHostname
I0210 10:31:16.694539  436977 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:16.695054  436977 main.go:141] libmachine: (functional-607439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ff:fb", ip: ""} in network mk-functional-607439: {Iface:virbr1 ExpiryTime:2025-02-10 11:28:04 +0000 UTC Type:0 Mac:52:54:00:35:ff:fb Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-607439 Clientid:01:52:54:00:35:ff:fb}
I0210 10:31:16.695085  436977 main.go:141] libmachine: (functional-607439) DBG | domain functional-607439 has defined IP address 192.168.39.57 and MAC address 52:54:00:35:ff:fb in network mk-functional-607439
I0210 10:31:16.695442  436977 main.go:141] libmachine: (functional-607439) Calling .GetSSHPort
I0210 10:31:16.695648  436977 main.go:141] libmachine: (functional-607439) Calling .GetSSHKeyPath
I0210 10:31:16.695835  436977 main.go:141] libmachine: (functional-607439) Calling .GetSSHUsername
I0210 10:31:16.695997  436977 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/functional-607439/id_rsa Username:docker}
I0210 10:31:16.781170  436977 build_images.go:161] Building image from path: /tmp/build.1008088072.tar
I0210 10:31:16.781260  436977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0210 10:31:16.799450  436977 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1008088072.tar
I0210 10:31:16.804735  436977 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1008088072.tar: stat -c "%s %y" /var/lib/minikube/build/build.1008088072.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1008088072.tar': No such file or directory
I0210 10:31:16.804779  436977 ssh_runner.go:362] scp /tmp/build.1008088072.tar --> /var/lib/minikube/build/build.1008088072.tar (3072 bytes)
I0210 10:31:16.851340  436977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1008088072
I0210 10:31:16.873511  436977 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1008088072 -xf /var/lib/minikube/build/build.1008088072.tar
I0210 10:31:16.887481  436977 docker.go:360] Building image: /var/lib/minikube/build/build.1008088072
I0210 10:31:16.887589  436977 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-607439 /var/lib/minikube/build/build.1008088072
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s
#6 [2/3] RUN true
#6 DONE 0.4s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:c6d63f22b2e89a68b32fddf803316b02360d4852041af58503759a69782d034a
#8 writing image sha256:c6d63f22b2e89a68b32fddf803316b02360d4852041af58503759a69782d034a done
#8 naming to localhost/my-image:functional-607439 0.0s done
#8 DONE 0.1s
I0210 10:31:19.789953  436977 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-607439 /var/lib/minikube/build/build.1008088072: (2.902281408s)
I0210 10:31:19.790042  436977 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1008088072
I0210 10:31:19.801613  436977 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1008088072.tar
I0210 10:31:19.813872  436977 build_images.go:217] Built localhost/my-image:functional-607439 from /tmp/build.1008088072.tar
I0210 10:31:19.813913  436977 build_images.go:133] succeeded building to: functional-607439
I0210 10:31:19.813920  436977 build_images.go:134] failed building to: 
I0210 10:31:19.813987  436977 main.go:141] libmachine: Making call to close driver server
I0210 10:31:19.814010  436977 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:19.814317  436977 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:19.814334  436977 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:31:19.814343  436977 main.go:141] libmachine: Making call to close driver server
I0210 10:31:19.814350  436977 main.go:141] libmachine: (functional-607439) Calling .Close
I0210 10:31:19.814636  436977 main.go:141] libmachine: (functional-607439) DBG | Closing plugin on server side
I0210 10:31:19.814722  436977 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:31:19.814771  436977 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.64s)
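The ImageBuild log above walks a fixed staging sequence: pack the local build context into a tar (`/tmp/build.1008088072.tar`), scp it into `/var/lib/minikube/build`, unpack it there, then run `docker build` against the unpacked directory. A minimal Python sketch of the pack-and-unpack staging steps, using local temporary directories in place of the guest paths (illustrative only; minikube's real implementation is the Go code in `build_images.go`):

```python
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as work:
    # A tiny build context, standing in for minikube's testdata/build.
    context = os.path.join(work, "context")
    os.makedirs(context)
    with open(os.path.join(context, "content.txt"), "w") as f:
        f.write("hello\n")

    # Pack it, as with /tmp/build.1008088072.tar in the log.
    tar_path = os.path.join(work, "build.tar")
    with tarfile.open(tar_path, "w") as tar:
        tar.add(context, arcname=".")

    # Unpack into a staging dir, as with
    # `sudo tar -C /var/lib/minikube/build/build.<N> -xf ...` on the guest;
    # `docker build` would then be pointed at this directory.
    staged = os.path.join(work, "staged")
    os.makedirs(staged)
    with tarfile.open(tar_path) as tar:
        tar.extractall(staged)

    staged_files = sorted(os.listdir(staged))
    print(staged_files)
```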

TestFunctional/parallel/ImageCommands/Setup (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/02/10 10:31:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.602381882s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-607439
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.63s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image load --daemon kicbase/echo-server:functional-607439 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-607439 image load --daemon kicbase/echo-server:functional-607439 --alsologtostderr: (1.019915653s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls
E0210 10:31:10.134247  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

TestFunctional/parallel/DockerEnv/bash (0.86s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-607439 docker-env) && out/minikube-linux-amd64 status -p functional-607439"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-607439 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.86s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image load --daemon kicbase/echo-server:functional-607439 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-607439
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image load --daemon kicbase/echo-server:functional-607439 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdspecific-port140056553/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.675658ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0210 10:31:11.656384  428547 retry.go:31] will retry after 386.0908ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdspecific-port140056553/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 ssh "sudo umount -f /mount-9p": exit status 1 (222.277651ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-607439 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdspecific-port140056553/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)
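When the first `findmnt` probe fails, the harness logs `will retry after 386.0908ms` and re-runs the command until it succeeds. A sketch of that retry-with-backoff pattern in Python (a hypothetical helper for illustration, not minikube's actual `retry.go`):

```python
import random
import time

def retry(fn, attempts=5, base_delay=0.25):
    """Retry fn until it stops raising, sleeping a jittered exponential
    backoff between attempts -- the same shape as the harness's
    'will retry after ...ms' messages (hypothetical helper)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:
            if attempt == attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            print(f"will retry after {delay * 1000:.4f}ms: {err}")
            time.sleep(delay)

calls = {"n": 0}

def flaky():
    # Fails once (like the first findmnt probe), then succeeds.
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("exit status 1")
    return "mounted"

result = retry(flaky)
```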

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image save kicbase/echo-server:functional-607439 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3498455950/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3498455950/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3498455950/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T" /mount1: exit status 1 (318.250615ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0210 10:31:13.460598  428547 retry.go:31] will retry after 354.042414ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-607439 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3498455950/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3498455950/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-607439 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3498455950/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)
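The first `findmnt -T /mount1` probe above failed and succeeded only after the harness retried (`retry.go:31] will retry after 354.042414ms`). A minimal sketch of that retry-with-backoff pattern in plain shell; the function name, delay schedule, and `probe` stand-in are illustrative, not minikube's actual retry implementation:

```shell
# retry_cmd: re-run a command until it succeeds or the attempt budget runs out.
# The delay doubles each round; retry.go uses a jittered schedule instead.
retry_cmd() {
  local attempts=$1; shift
  local delay=0.2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
    delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')
  done
  return 1
}

# Stand-in probe: fails twice, then succeeds (like a mount appearing late).
probe() {
  count=$((count + 1))
  [ "$count" -ge 3 ]
}

count=0
retry_cmd 5 probe && echo "mount became visible after $count probes"
# → mount became visible after 3 probes
```

In the test, the probed command is `minikube ssh "findmnt -T /mountN"`; the retry only masks the startup race, so a mount that never appears still fails the test once the attempts are exhausted.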

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image rm kicbase/echo-server:functional-607439 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-607439
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-607439 image save --daemon kicbase/echo-server:functional-607439 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-607439
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-607439
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-607439
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-607439
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (186.88s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-372086 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-372086 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m15.421029458s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-372086 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-372086 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.738636992s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-372086 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-372086 addons enable gvisor: (3.538303346s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [3d774d25-9a57-40b0-a9c4-afc3ecd2ebe9] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003742517s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-372086 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [215bca81-03f4-4f37-88dd-040026f6e451] Pending
helpers_test.go:344: "nginx-gvisor" [215bca81-03f4-4f37-88dd-040026f6e451] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [215bca81-03f4-4f37-88dd-040026f6e451] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 29.004059934s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-372086
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-372086: (6.66307864s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-372086 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-372086 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (31.221488979s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [3d774d25-9a57-40b0-a9c4-afc3ecd2ebe9] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003612724s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [215bca81-03f4-4f37-88dd-040026f6e451] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.003384607s
helpers_test.go:175: Cleaning up "gvisor-372086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-372086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-372086: (1.122474961s)
--- PASS: TestGvisorAddon (186.88s)

TestMultiControlPlane/serial/StartCluster (218.24s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-380806 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0210 10:32:32.057297  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:34:48.197392  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:15.899608  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-380806 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m37.531582582s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (218.24s)

TestMultiControlPlane/serial/DeployApp (5.22s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-380806 -- rollout status deployment/busybox: (3.058899461s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-7btpx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-bmq47 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-l8jjr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-7btpx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-bmq47 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-l8jjr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-7btpx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-bmq47 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-l8jjr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.22s)

TestMultiControlPlane/serial/PingHostFromPods (1.23s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-7btpx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-7btpx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-bmq47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-bmq47 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-l8jjr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-380806 -- exec busybox-58667487b6-l8jjr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
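The ping target above is extracted inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`: line 5 of BusyBox nslookup output is the second `Address` line, and its third space-delimited field is the resolved IP. A self-contained demonstration of that pipeline against canned output (the addresses below are sample values, not from this run):

```shell
# Canned BusyBox-style nslookup output; the real test captures this via
# kubectl exec inside each busybox pod.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# NR==5 selects the answer's "Address 1:" line; field 3 is the resolved IP.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
# → 192.168.39.1
```

The extracted IP is then fed to `ping -c 1`, so a resolution failure (wrong line number, different resolver output shape) surfaces as a ping against an empty or garbage address.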

TestMultiControlPlane/serial/AddWorkerNode (62.77s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-380806 -v=7 --alsologtostderr
E0210 10:35:52.898649  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:52.905061  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:52.916477  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:52.937926  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:52.979445  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:53.060947  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:53.222585  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:53.544028  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:54.185692  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:55.467172  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:35:58.029195  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:36:03.151084  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:36:13.392426  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-380806 -v=7 --alsologtostderr: (1m1.908332408s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.77s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-380806 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (13.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp testdata/cp-test.txt ha-380806:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile489342883/001/cp-test_ha-380806.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806:/home/docker/cp-test.txt ha-380806-m02:/home/docker/cp-test_ha-380806_ha-380806-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test_ha-380806_ha-380806-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806:/home/docker/cp-test.txt ha-380806-m03:/home/docker/cp-test_ha-380806_ha-380806-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test_ha-380806_ha-380806-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806:/home/docker/cp-test.txt ha-380806-m04:/home/docker/cp-test_ha-380806_ha-380806-m04.txt
E0210 10:36:33.874109  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test_ha-380806_ha-380806-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp testdata/cp-test.txt ha-380806-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile489342883/001/cp-test_ha-380806-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m02:/home/docker/cp-test.txt ha-380806:/home/docker/cp-test_ha-380806-m02_ha-380806.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test_ha-380806-m02_ha-380806.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m02:/home/docker/cp-test.txt ha-380806-m03:/home/docker/cp-test_ha-380806-m02_ha-380806-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test_ha-380806-m02_ha-380806-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m02:/home/docker/cp-test.txt ha-380806-m04:/home/docker/cp-test_ha-380806-m02_ha-380806-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test_ha-380806-m02_ha-380806-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp testdata/cp-test.txt ha-380806-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile489342883/001/cp-test_ha-380806-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m03:/home/docker/cp-test.txt ha-380806:/home/docker/cp-test_ha-380806-m03_ha-380806.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test_ha-380806-m03_ha-380806.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m03:/home/docker/cp-test.txt ha-380806-m02:/home/docker/cp-test_ha-380806-m03_ha-380806-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test_ha-380806-m03_ha-380806-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m03:/home/docker/cp-test.txt ha-380806-m04:/home/docker/cp-test_ha-380806-m03_ha-380806-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test_ha-380806-m03_ha-380806-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp testdata/cp-test.txt ha-380806-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile489342883/001/cp-test_ha-380806-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m04:/home/docker/cp-test.txt ha-380806:/home/docker/cp-test_ha-380806-m04_ha-380806.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806 "sudo cat /home/docker/cp-test_ha-380806-m04_ha-380806.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m04:/home/docker/cp-test.txt ha-380806-m02:/home/docker/cp-test_ha-380806-m04_ha-380806-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m02 "sudo cat /home/docker/cp-test_ha-380806-m04_ha-380806-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 cp ha-380806-m04:/home/docker/cp-test.txt ha-380806-m03:/home/docker/cp-test_ha-380806-m04_ha-380806-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 ssh -n ha-380806-m03 "sudo cat /home/docker/cp-test_ha-380806-m04_ha-380806-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.03s)
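Every CopyFile step above follows the same shape: `minikube cp` a file onto a node, then `ssh -n <node> "sudo cat …"` it back to confirm the contents survived. A local, VM-free sketch of that round-trip check; plain directories stand in for the hypothetical node1/node2 hosts, so this illustrates the verification pattern, not the minikube cp implementation:

```shell
# Round-trip a file host -> node1 -> node2 -> host and verify byte equality,
# mirroring the cp-test.txt matrix in the CopyFile test.
workdir=$(mktemp -d)
mkdir -p "$workdir/node1" "$workdir/node2"

printf 'Test file for /cp-test\n' > "$workdir/cp-test.txt"

cp "$workdir/cp-test.txt" "$workdir/node1/cp-test.txt"           # host  -> node1
cp "$workdir/node1/cp-test.txt" "$workdir/node2/cp-test.txt"     # node1 -> node2
cp "$workdir/node2/cp-test.txt" "$workdir/cp-test_roundtrip.txt" # node2 -> host

# cmp exits 0 only if the two files are byte-identical.
cmp -s "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt" && echo "round trip intact"
rm -rf "$workdir"
```

Comparing content at every hop (rather than just checking the file exists) is what lets the test catch truncation or encoding damage introduced by the transfer path.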

TestMultiControlPlane/serial/StopSecondaryNode (13.3s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-380806 node stop m02 -v=7 --alsologtostderr: (12.652221356s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr: exit status 7 (644.276627ms)

-- stdout --
	ha-380806
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-380806-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380806-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-380806-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0210 10:36:56.224094  441614 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:36:56.224221  441614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:36:56.224231  441614 out.go:358] Setting ErrFile to fd 2...
	I0210 10:36:56.224235  441614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:36:56.224468  441614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 10:36:56.224686  441614 out.go:352] Setting JSON to false
	I0210 10:36:56.224730  441614 mustload.go:65] Loading cluster: ha-380806
	I0210 10:36:56.224837  441614 notify.go:220] Checking for updates...
	I0210 10:36:56.225294  441614 config.go:182] Loaded profile config "ha-380806": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:36:56.225321  441614 status.go:174] checking status of ha-380806 ...
	I0210 10:36:56.225838  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.225898  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.247017  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I0210 10:36:56.247429  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.248049  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.248077  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.248473  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.248659  441614 main.go:141] libmachine: (ha-380806) Calling .GetState
	I0210 10:36:56.250225  441614 status.go:371] ha-380806 host status = "Running" (err=<nil>)
	I0210 10:36:56.250250  441614 host.go:66] Checking if "ha-380806" exists ...
	I0210 10:36:56.250577  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.250626  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.266342  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0210 10:36:56.266729  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.267198  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.267225  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.267509  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.267707  441614 main.go:141] libmachine: (ha-380806) Calling .GetIP
	I0210 10:36:56.271123  441614 main.go:141] libmachine: (ha-380806) DBG | domain ha-380806 has defined MAC address 52:54:00:d9:7f:bc in network mk-ha-380806
	I0210 10:36:56.271551  441614 main.go:141] libmachine: (ha-380806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:7f:bc", ip: ""} in network mk-ha-380806: {Iface:virbr1 ExpiryTime:2025-02-10 11:31:56 +0000 UTC Type:0 Mac:52:54:00:d9:7f:bc Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-380806 Clientid:01:52:54:00:d9:7f:bc}
	I0210 10:36:56.271584  441614 main.go:141] libmachine: (ha-380806) DBG | domain ha-380806 has defined IP address 192.168.39.214 and MAC address 52:54:00:d9:7f:bc in network mk-ha-380806
	I0210 10:36:56.271720  441614 host.go:66] Checking if "ha-380806" exists ...
	I0210 10:36:56.272028  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.272096  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.287693  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I0210 10:36:56.288165  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.288896  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.288923  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.289349  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.289567  441614 main.go:141] libmachine: (ha-380806) Calling .DriverName
	I0210 10:36:56.289781  441614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:36:56.289816  441614 main.go:141] libmachine: (ha-380806) Calling .GetSSHHostname
	I0210 10:36:56.292671  441614 main.go:141] libmachine: (ha-380806) DBG | domain ha-380806 has defined MAC address 52:54:00:d9:7f:bc in network mk-ha-380806
	I0210 10:36:56.293097  441614 main.go:141] libmachine: (ha-380806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:7f:bc", ip: ""} in network mk-ha-380806: {Iface:virbr1 ExpiryTime:2025-02-10 11:31:56 +0000 UTC Type:0 Mac:52:54:00:d9:7f:bc Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-380806 Clientid:01:52:54:00:d9:7f:bc}
	I0210 10:36:56.293165  441614 main.go:141] libmachine: (ha-380806) DBG | domain ha-380806 has defined IP address 192.168.39.214 and MAC address 52:54:00:d9:7f:bc in network mk-ha-380806
	I0210 10:36:56.293218  441614 main.go:141] libmachine: (ha-380806) Calling .GetSSHPort
	I0210 10:36:56.293479  441614 main.go:141] libmachine: (ha-380806) Calling .GetSSHKeyPath
	I0210 10:36:56.293629  441614 main.go:141] libmachine: (ha-380806) Calling .GetSSHUsername
	I0210 10:36:56.293808  441614 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/ha-380806/id_rsa Username:docker}
	I0210 10:36:56.377250  441614 ssh_runner.go:195] Run: systemctl --version
	I0210 10:36:56.383425  441614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:36:56.397389  441614 kubeconfig.go:125] found "ha-380806" server: "https://192.168.39.254:8443"
	I0210 10:36:56.397433  441614 api_server.go:166] Checking apiserver status ...
	I0210 10:36:56.397471  441614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:36:56.412554  441614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1987/cgroup
	W0210 10:36:56.423226  441614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1987/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 10:36:56.423280  441614 ssh_runner.go:195] Run: ls
	I0210 10:36:56.428007  441614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 10:36:56.432879  441614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 10:36:56.432901  441614 status.go:463] ha-380806 apiserver status = Running (err=<nil>)
	I0210 10:36:56.432912  441614 status.go:176] ha-380806 status: &{Name:ha-380806 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:36:56.432937  441614 status.go:174] checking status of ha-380806-m02 ...
	I0210 10:36:56.433279  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.433316  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.449339  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0210 10:36:56.449839  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.450323  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.450344  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.450735  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.450919  441614 main.go:141] libmachine: (ha-380806-m02) Calling .GetState
	I0210 10:36:56.452649  441614 status.go:371] ha-380806-m02 host status = "Stopped" (err=<nil>)
	I0210 10:36:56.452662  441614 status.go:384] host is not running, skipping remaining checks
	I0210 10:36:56.452667  441614 status.go:176] ha-380806-m02 status: &{Name:ha-380806-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:36:56.452689  441614 status.go:174] checking status of ha-380806-m03 ...
	I0210 10:36:56.453014  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.453061  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.470129  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0210 10:36:56.470647  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.471260  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.471290  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.471624  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.471837  441614 main.go:141] libmachine: (ha-380806-m03) Calling .GetState
	I0210 10:36:56.473584  441614 status.go:371] ha-380806-m03 host status = "Running" (err=<nil>)
	I0210 10:36:56.473605  441614 host.go:66] Checking if "ha-380806-m03" exists ...
	I0210 10:36:56.473912  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.473949  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.489077  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0210 10:36:56.489567  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.490043  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.490066  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.490395  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.490609  441614 main.go:141] libmachine: (ha-380806-m03) Calling .GetIP
	I0210 10:36:56.493817  441614 main.go:141] libmachine: (ha-380806-m03) DBG | domain ha-380806-m03 has defined MAC address 52:54:00:49:10:c4 in network mk-ha-380806
	I0210 10:36:56.494322  441614 main.go:141] libmachine: (ha-380806-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:10:c4", ip: ""} in network mk-ha-380806: {Iface:virbr1 ExpiryTime:2025-02-10 11:34:11 +0000 UTC Type:0 Mac:52:54:00:49:10:c4 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:ha-380806-m03 Clientid:01:52:54:00:49:10:c4}
	I0210 10:36:56.494353  441614 main.go:141] libmachine: (ha-380806-m03) DBG | domain ha-380806-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:49:10:c4 in network mk-ha-380806
	I0210 10:36:56.494561  441614 host.go:66] Checking if "ha-380806-m03" exists ...
	I0210 10:36:56.494874  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.494938  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.510366  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0210 10:36:56.510819  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.511373  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.511396  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.511740  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.511988  441614 main.go:141] libmachine: (ha-380806-m03) Calling .DriverName
	I0210 10:36:56.512270  441614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:36:56.512293  441614 main.go:141] libmachine: (ha-380806-m03) Calling .GetSSHHostname
	I0210 10:36:56.515275  441614 main.go:141] libmachine: (ha-380806-m03) DBG | domain ha-380806-m03 has defined MAC address 52:54:00:49:10:c4 in network mk-ha-380806
	I0210 10:36:56.515766  441614 main.go:141] libmachine: (ha-380806-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:10:c4", ip: ""} in network mk-ha-380806: {Iface:virbr1 ExpiryTime:2025-02-10 11:34:11 +0000 UTC Type:0 Mac:52:54:00:49:10:c4 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:ha-380806-m03 Clientid:01:52:54:00:49:10:c4}
	I0210 10:36:56.515797  441614 main.go:141] libmachine: (ha-380806-m03) DBG | domain ha-380806-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:49:10:c4 in network mk-ha-380806
	I0210 10:36:56.515878  441614 main.go:141] libmachine: (ha-380806-m03) Calling .GetSSHPort
	I0210 10:36:56.516076  441614 main.go:141] libmachine: (ha-380806-m03) Calling .GetSSHKeyPath
	I0210 10:36:56.516232  441614 main.go:141] libmachine: (ha-380806-m03) Calling .GetSSHUsername
	I0210 10:36:56.516356  441614 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/ha-380806-m03/id_rsa Username:docker}
	I0210 10:36:56.600641  441614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:36:56.616084  441614 kubeconfig.go:125] found "ha-380806" server: "https://192.168.39.254:8443"
	I0210 10:36:56.616117  441614 api_server.go:166] Checking apiserver status ...
	I0210 10:36:56.616153  441614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:36:56.629466  441614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1834/cgroup
	W0210 10:36:56.640891  441614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1834/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 10:36:56.640998  441614 ssh_runner.go:195] Run: ls
	I0210 10:36:56.645268  441614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 10:36:56.649856  441614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 10:36:56.649880  441614 status.go:463] ha-380806-m03 apiserver status = Running (err=<nil>)
	I0210 10:36:56.649889  441614 status.go:176] ha-380806-m03 status: &{Name:ha-380806-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:36:56.649914  441614 status.go:174] checking status of ha-380806-m04 ...
	I0210 10:36:56.650233  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.650276  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.666674  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0210 10:36:56.667074  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.667564  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.667584  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.667879  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.668073  441614 main.go:141] libmachine: (ha-380806-m04) Calling .GetState
	I0210 10:36:56.669643  441614 status.go:371] ha-380806-m04 host status = "Running" (err=<nil>)
	I0210 10:36:56.669660  441614 host.go:66] Checking if "ha-380806-m04" exists ...
	I0210 10:36:56.669954  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.669989  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.684781  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0210 10:36:56.685210  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.685719  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.685742  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.686102  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.686306  441614 main.go:141] libmachine: (ha-380806-m04) Calling .GetIP
	I0210 10:36:56.689583  441614 main.go:141] libmachine: (ha-380806-m04) DBG | domain ha-380806-m04 has defined MAC address 52:54:00:a1:e4:ee in network mk-ha-380806
	I0210 10:36:56.690066  441614 main.go:141] libmachine: (ha-380806-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:e4:ee", ip: ""} in network mk-ha-380806: {Iface:virbr1 ExpiryTime:2025-02-10 11:35:42 +0000 UTC Type:0 Mac:52:54:00:a1:e4:ee Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-380806-m04 Clientid:01:52:54:00:a1:e4:ee}
	I0210 10:36:56.690094  441614 main.go:141] libmachine: (ha-380806-m04) DBG | domain ha-380806-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:a1:e4:ee in network mk-ha-380806
	I0210 10:36:56.690235  441614 host.go:66] Checking if "ha-380806-m04" exists ...
	I0210 10:36:56.690536  441614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:36:56.690593  441614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:36:56.707354  441614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0210 10:36:56.707877  441614 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:36:56.708497  441614 main.go:141] libmachine: Using API Version  1
	I0210 10:36:56.708511  441614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:36:56.708866  441614 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:36:56.709055  441614 main.go:141] libmachine: (ha-380806-m04) Calling .DriverName
	I0210 10:36:56.709281  441614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:36:56.709304  441614 main.go:141] libmachine: (ha-380806-m04) Calling .GetSSHHostname
	I0210 10:36:56.712405  441614 main.go:141] libmachine: (ha-380806-m04) DBG | domain ha-380806-m04 has defined MAC address 52:54:00:a1:e4:ee in network mk-ha-380806
	I0210 10:36:56.712899  441614 main.go:141] libmachine: (ha-380806-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:e4:ee", ip: ""} in network mk-ha-380806: {Iface:virbr1 ExpiryTime:2025-02-10 11:35:42 +0000 UTC Type:0 Mac:52:54:00:a1:e4:ee Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-380806-m04 Clientid:01:52:54:00:a1:e4:ee}
	I0210 10:36:56.712933  441614 main.go:141] libmachine: (ha-380806-m04) DBG | domain ha-380806-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:a1:e4:ee in network mk-ha-380806
	I0210 10:36:56.713076  441614 main.go:141] libmachine: (ha-380806-m04) Calling .GetSSHPort
	I0210 10:36:56.713261  441614 main.go:141] libmachine: (ha-380806-m04) Calling .GetSSHKeyPath
	I0210 10:36:56.713459  441614 main.go:141] libmachine: (ha-380806-m04) Calling .GetSSHUsername
	I0210 10:36:56.713597  441614 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/ha-380806-m04/id_rsa Username:docker}
	I0210 10:36:56.799811  441614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:36:56.814404  441614 status.go:176] ha-380806-m04 status: &{Name:ha-380806-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.30s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 node start m02 -v=7 --alsologtostderr
E0210 10:37:14.836028  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-380806 node start m02 -v=7 --alsologtostderr: (37.941077425s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (224.28s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-380806 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-380806 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-380806 -v=7 --alsologtostderr: (40.941728162s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-380806 --wait=true -v=7 --alsologtostderr
E0210 10:38:36.760525  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:48.195713  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:52.899096  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:41:20.602869  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-380806 --wait=true -v=7 --alsologtostderr: (3m3.232148907s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-380806
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (224.28s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.26s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-380806 node delete m03 -v=7 --alsologtostderr: (6.493240716s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.26s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

TestMultiControlPlane/serial/StopCluster (37.6s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-380806 stop -v=7 --alsologtostderr: (37.490339432s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr: exit status 7 (106.496042ms)

-- stdout --
	ha-380806
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380806-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380806-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0210 10:42:06.861801  444049 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:42:06.862154  444049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:42:06.862164  444049 out.go:358] Setting ErrFile to fd 2...
	I0210 10:42:06.862169  444049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:42:06.862377  444049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 10:42:06.862562  444049 out.go:352] Setting JSON to false
	I0210 10:42:06.862595  444049 mustload.go:65] Loading cluster: ha-380806
	I0210 10:42:06.862695  444049 notify.go:220] Checking for updates...
	I0210 10:42:06.863043  444049 config.go:182] Loaded profile config "ha-380806": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:42:06.863069  444049 status.go:174] checking status of ha-380806 ...
	I0210 10:42:06.863513  444049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:42:06.863555  444049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:42:06.879647  444049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43031
	I0210 10:42:06.880121  444049 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:42:06.880809  444049 main.go:141] libmachine: Using API Version  1
	I0210 10:42:06.880833  444049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:42:06.881214  444049 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:42:06.881418  444049 main.go:141] libmachine: (ha-380806) Calling .GetState
	I0210 10:42:06.883015  444049 status.go:371] ha-380806 host status = "Stopped" (err=<nil>)
	I0210 10:42:06.883028  444049 status.go:384] host is not running, skipping remaining checks
	I0210 10:42:06.883033  444049 status.go:176] ha-380806 status: &{Name:ha-380806 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:42:06.883062  444049 status.go:174] checking status of ha-380806-m02 ...
	I0210 10:42:06.883364  444049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:42:06.883409  444049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:42:06.898222  444049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0210 10:42:06.898568  444049 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:42:06.898954  444049 main.go:141] libmachine: Using API Version  1
	I0210 10:42:06.898973  444049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:42:06.899294  444049 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:42:06.899477  444049 main.go:141] libmachine: (ha-380806-m02) Calling .GetState
	I0210 10:42:06.900902  444049 status.go:371] ha-380806-m02 host status = "Stopped" (err=<nil>)
	I0210 10:42:06.900917  444049 status.go:384] host is not running, skipping remaining checks
	I0210 10:42:06.900925  444049 status.go:176] ha-380806-m02 status: &{Name:ha-380806-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:42:06.900946  444049 status.go:174] checking status of ha-380806-m04 ...
	I0210 10:42:06.901270  444049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:42:06.901335  444049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:42:06.915709  444049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0210 10:42:06.916096  444049 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:42:06.916499  444049 main.go:141] libmachine: Using API Version  1
	I0210 10:42:06.916519  444049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:42:06.916867  444049 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:42:06.917030  444049 main.go:141] libmachine: (ha-380806-m04) Calling .GetState
	I0210 10:42:06.918542  444049 status.go:371] ha-380806-m04 host status = "Stopped" (err=<nil>)
	I0210 10:42:06.918555  444049 status.go:384] host is not running, skipping remaining checks
	I0210 10:42:06.918560  444049 status.go:176] ha-380806-m04 status: &{Name:ha-380806-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.60s)

TestMultiControlPlane/serial/RestartCluster (157.15s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-380806 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-380806 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m36.399749658s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (157.15s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (83.48s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-380806 --control-plane -v=7 --alsologtostderr
E0210 10:44:48.196316  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:45:52.898428  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-380806 --control-plane -v=7 --alsologtostderr: (1m22.608982383s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-380806 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestImageBuild/serial/Setup (52.04s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-430472 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-430472 --driver=kvm2 : (52.041108163s)
--- PASS: TestImageBuild/serial/Setup (52.04s)

TestImageBuild/serial/NormalBuild (1.36s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-430472
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-430472: (1.359975563s)
--- PASS: TestImageBuild/serial/NormalBuild (1.36s)

TestImageBuild/serial/BuildWithBuildArg (0.88s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-430472
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.88s)

TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-430472
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-430472
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

TestJSONOutput/start/Command (89.51s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-281459 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-281459 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m29.505788334s)
--- PASS: TestJSONOutput/start/Command (89.51s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-281459 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-281459 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.59s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-281459 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-281459 --output=json --user=testUser: (12.592507099s)
--- PASS: TestJSONOutput/stop/Command (12.59s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-927098 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-927098 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.903266ms)

-- stdout --
	{"specversion":"1.0","id":"5d5ace0f-b517-4a7d-be1f-e0036c25b92d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-927098] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91d411bc-47ad-4c36-9519-8fac83e51a02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20385"}}
	{"specversion":"1.0","id":"729da378-0dd1-4695-8f6c-a1e9276ba36f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"222df14e-7fe8-4bf5-b1f9-93e9b2bc4445","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig"}}
	{"specversion":"1.0","id":"54026ec0-4503-4a6f-bc89-7ea453bf31f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube"}}
	{"specversion":"1.0","id":"627be30b-90e2-4178-975c-758a033c7b12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e4305150-4054-4a86-b4fa-c3a4b3614e99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5404a464-3cb6-491d-b3af-3540c12f8e29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-927098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-927098
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (98.12s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-913991 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-913991 --driver=kvm2 : (46.409004331s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-929244 --driver=kvm2 
E0210 10:49:48.199995  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-929244 --driver=kvm2 : (48.846179488s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-913991
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-929244
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-929244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-929244
helpers_test.go:175: Cleaning up "first-913991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-913991
--- PASS: TestMinikubeProfile (98.12s)

TestMountStart/serial/StartWithMountFirst (32.42s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-014630 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0210 10:50:52.902165  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-014630 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.423522337s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.42s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-014630 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-014630 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (28.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-039410 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-039410 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.302290222s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.30s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039410 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039410 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.9s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-014630 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039410 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039410 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (2.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-039410
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-039410: (2.281661101s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (26.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-039410
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-039410: (25.87748745s)
--- PASS: TestMountStart/serial/RestartStopped (26.88s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039410 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039410 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (144.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-742240 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0210 10:52:15.964992  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-742240 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m23.983648653s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (144.39s)

TestMultiNode/serial/DeployApp2Nodes (4.25s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-742240 -- rollout status deployment/busybox: (2.736492417s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-894v2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-hfv6r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-894v2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-hfv6r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-894v2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-hfv6r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.25s)

TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-894v2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-894v2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-hfv6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-742240 -- exec busybox-58667487b6-hfv6r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

TestMultiNode/serial/AddNode (61.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-742240 -v 3 --alsologtostderr
E0210 10:54:48.196943  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-742240 -v 3 --alsologtostderr: (1m1.05615899s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (61.62s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-742240 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (7.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp testdata/cp-test.txt multinode-742240:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1464288667/001/cp-test_multinode-742240.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240:/home/docker/cp-test.txt multinode-742240-m02:/home/docker/cp-test_multinode-742240_multinode-742240-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m02 "sudo cat /home/docker/cp-test_multinode-742240_multinode-742240-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240:/home/docker/cp-test.txt multinode-742240-m03:/home/docker/cp-test_multinode-742240_multinode-742240-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m03 "sudo cat /home/docker/cp-test_multinode-742240_multinode-742240-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp testdata/cp-test.txt multinode-742240-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1464288667/001/cp-test_multinode-742240-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240-m02:/home/docker/cp-test.txt multinode-742240:/home/docker/cp-test_multinode-742240-m02_multinode-742240.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240 "sudo cat /home/docker/cp-test_multinode-742240-m02_multinode-742240.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240-m02:/home/docker/cp-test.txt multinode-742240-m03:/home/docker/cp-test_multinode-742240-m02_multinode-742240-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m03 "sudo cat /home/docker/cp-test_multinode-742240-m02_multinode-742240-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp testdata/cp-test.txt multinode-742240-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1464288667/001/cp-test_multinode-742240-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240-m03:/home/docker/cp-test.txt multinode-742240:/home/docker/cp-test_multinode-742240-m03_multinode-742240.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240 "sudo cat /home/docker/cp-test_multinode-742240-m03_multinode-742240.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 cp multinode-742240-m03:/home/docker/cp-test.txt multinode-742240-m02:/home/docker/cp-test_multinode-742240-m03_multinode-742240-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 ssh -n multinode-742240-m02 "sudo cat /home/docker/cp-test_multinode-742240-m03_multinode-742240-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.33s)
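The CopyFile steps above repeat one round trip per node pair: `cp` a payload onto a node, `cat` it back over SSH, and compare contents. A minimal local sketch of that pattern, using plain files and hypothetical paths in place of `minikube cp` / `minikube ssh -n <node>` (no minikube involved):

```shell
# Round-trip copy check, sketched locally.
set -e
src=$(mktemp)   # stand-in for testdata/cp-test.txt
dst=$(mktemp)   # stand-in for <node>:/home/docker/cp-test.txt
printf 'hello from cp-test\n' > "$src"

cp "$src" "$dst"      # stands in for: minikube -p <profile> cp "$src" <node>:/home/docker/cp-test.txt
got=$(cat "$dst")     # stands in for: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"

[ "$got" = 'hello from cp-test' ] && echo 'contents match'
```

Judging from the paired Run lines above, the helper fails the test if any `cat` output differs from the payload it copied in.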

TestMultiNode/serial/StopNode (3.43s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-742240 node stop m03: (2.540026582s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-742240 status: exit status 7 (463.185988ms)

-- stdout --
	multinode-742240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-742240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-742240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr: exit status 7 (426.399749ms)

-- stdout --
	multinode-742240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-742240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-742240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0210 10:55:46.587149  452768 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:55:46.587274  452768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:55:46.587285  452768 out.go:358] Setting ErrFile to fd 2...
	I0210 10:55:46.587291  452768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:55:46.587476  452768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 10:55:46.587668  452768 out.go:352] Setting JSON to false
	I0210 10:55:46.587705  452768 mustload.go:65] Loading cluster: multinode-742240
	I0210 10:55:46.587822  452768 notify.go:220] Checking for updates...
	I0210 10:55:46.588143  452768 config.go:182] Loaded profile config "multinode-742240": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:55:46.588170  452768 status.go:174] checking status of multinode-742240 ...
	I0210 10:55:46.588649  452768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:55:46.588712  452768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:55:46.605161  452768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0210 10:55:46.605569  452768 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:55:46.606183  452768 main.go:141] libmachine: Using API Version  1
	I0210 10:55:46.606216  452768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:55:46.606567  452768 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:55:46.606783  452768 main.go:141] libmachine: (multinode-742240) Calling .GetState
	I0210 10:55:46.608486  452768 status.go:371] multinode-742240 host status = "Running" (err=<nil>)
	I0210 10:55:46.608506  452768 host.go:66] Checking if "multinode-742240" exists ...
	I0210 10:55:46.608844  452768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:55:46.608894  452768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:55:46.625081  452768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0210 10:55:46.625507  452768 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:55:46.625952  452768 main.go:141] libmachine: Using API Version  1
	I0210 10:55:46.625972  452768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:55:46.626362  452768 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:55:46.626544  452768 main.go:141] libmachine: (multinode-742240) Calling .GetIP
	I0210 10:55:46.629139  452768 main.go:141] libmachine: (multinode-742240) DBG | domain multinode-742240 has defined MAC address 52:54:00:7a:08:12 in network mk-multinode-742240
	I0210 10:55:46.629590  452768 main.go:141] libmachine: (multinode-742240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:08:12", ip: ""} in network mk-multinode-742240: {Iface:virbr1 ExpiryTime:2025-02-10 11:52:19 +0000 UTC Type:0 Mac:52:54:00:7a:08:12 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-742240 Clientid:01:52:54:00:7a:08:12}
	I0210 10:55:46.629626  452768 main.go:141] libmachine: (multinode-742240) DBG | domain multinode-742240 has defined IP address 192.168.39.156 and MAC address 52:54:00:7a:08:12 in network mk-multinode-742240
	I0210 10:55:46.629765  452768 host.go:66] Checking if "multinode-742240" exists ...
	I0210 10:55:46.630085  452768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:55:46.630136  452768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:55:46.646103  452768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0210 10:55:46.646504  452768 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:55:46.646930  452768 main.go:141] libmachine: Using API Version  1
	I0210 10:55:46.646950  452768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:55:46.647301  452768 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:55:46.647508  452768 main.go:141] libmachine: (multinode-742240) Calling .DriverName
	I0210 10:55:46.647708  452768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:55:46.647732  452768 main.go:141] libmachine: (multinode-742240) Calling .GetSSHHostname
	I0210 10:55:46.650427  452768 main.go:141] libmachine: (multinode-742240) DBG | domain multinode-742240 has defined MAC address 52:54:00:7a:08:12 in network mk-multinode-742240
	I0210 10:55:46.650840  452768 main.go:141] libmachine: (multinode-742240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:08:12", ip: ""} in network mk-multinode-742240: {Iface:virbr1 ExpiryTime:2025-02-10 11:52:19 +0000 UTC Type:0 Mac:52:54:00:7a:08:12 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-742240 Clientid:01:52:54:00:7a:08:12}
	I0210 10:55:46.650877  452768 main.go:141] libmachine: (multinode-742240) DBG | domain multinode-742240 has defined IP address 192.168.39.156 and MAC address 52:54:00:7a:08:12 in network mk-multinode-742240
	I0210 10:55:46.651033  452768 main.go:141] libmachine: (multinode-742240) Calling .GetSSHPort
	I0210 10:55:46.651229  452768 main.go:141] libmachine: (multinode-742240) Calling .GetSSHKeyPath
	I0210 10:55:46.651372  452768 main.go:141] libmachine: (multinode-742240) Calling .GetSSHUsername
	I0210 10:55:46.651511  452768 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/multinode-742240/id_rsa Username:docker}
	I0210 10:55:46.732619  452768 ssh_runner.go:195] Run: systemctl --version
	I0210 10:55:46.738768  452768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:55:46.753079  452768 kubeconfig.go:125] found "multinode-742240" server: "https://192.168.39.156:8443"
	I0210 10:55:46.753154  452768 api_server.go:166] Checking apiserver status ...
	I0210 10:55:46.753189  452768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:55:46.765890  452768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1931/cgroup
	W0210 10:55:46.775266  452768 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1931/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 10:55:46.775331  452768 ssh_runner.go:195] Run: ls
	I0210 10:55:46.779867  452768 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0210 10:55:46.784567  452768 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0210 10:55:46.784595  452768 status.go:463] multinode-742240 apiserver status = Running (err=<nil>)
	I0210 10:55:46.784605  452768 status.go:176] multinode-742240 status: &{Name:multinode-742240 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:55:46.784623  452768 status.go:174] checking status of multinode-742240-m02 ...
	I0210 10:55:46.784927  452768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:55:46.784962  452768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:55:46.801077  452768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0210 10:55:46.801654  452768 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:55:46.802267  452768 main.go:141] libmachine: Using API Version  1
	I0210 10:55:46.802292  452768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:55:46.802683  452768 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:55:46.802910  452768 main.go:141] libmachine: (multinode-742240-m02) Calling .GetState
	I0210 10:55:46.804656  452768 status.go:371] multinode-742240-m02 host status = "Running" (err=<nil>)
	I0210 10:55:46.804675  452768 host.go:66] Checking if "multinode-742240-m02" exists ...
	I0210 10:55:46.805082  452768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:55:46.805151  452768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:55:46.821425  452768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0210 10:55:46.821964  452768 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:55:46.822554  452768 main.go:141] libmachine: Using API Version  1
	I0210 10:55:46.822574  452768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:55:46.822942  452768 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:55:46.823145  452768 main.go:141] libmachine: (multinode-742240-m02) Calling .GetIP
	I0210 10:55:46.825998  452768 main.go:141] libmachine: (multinode-742240-m02) DBG | domain multinode-742240-m02 has defined MAC address 52:54:00:b2:05:d9 in network mk-multinode-742240
	I0210 10:55:46.826449  452768 main.go:141] libmachine: (multinode-742240-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:05:d9", ip: ""} in network mk-multinode-742240: {Iface:virbr1 ExpiryTime:2025-02-10 11:53:44 +0000 UTC Type:0 Mac:52:54:00:b2:05:d9 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-742240-m02 Clientid:01:52:54:00:b2:05:d9}
	I0210 10:55:46.826479  452768 main.go:141] libmachine: (multinode-742240-m02) DBG | domain multinode-742240-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:b2:05:d9 in network mk-multinode-742240
	I0210 10:55:46.826610  452768 host.go:66] Checking if "multinode-742240-m02" exists ...
	I0210 10:55:46.827016  452768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:55:46.827081  452768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:55:46.842406  452768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I0210 10:55:46.842779  452768 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:55:46.843208  452768 main.go:141] libmachine: Using API Version  1
	I0210 10:55:46.843225  452768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:55:46.843550  452768 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:55:46.843751  452768 main.go:141] libmachine: (multinode-742240-m02) Calling .DriverName
	I0210 10:55:46.843976  452768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:55:46.844012  452768 main.go:141] libmachine: (multinode-742240-m02) Calling .GetSSHHostname
	I0210 10:55:46.846632  452768 main.go:141] libmachine: (multinode-742240-m02) DBG | domain multinode-742240-m02 has defined MAC address 52:54:00:b2:05:d9 in network mk-multinode-742240
	I0210 10:55:46.847019  452768 main.go:141] libmachine: (multinode-742240-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:05:d9", ip: ""} in network mk-multinode-742240: {Iface:virbr1 ExpiryTime:2025-02-10 11:53:44 +0000 UTC Type:0 Mac:52:54:00:b2:05:d9 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-742240-m02 Clientid:01:52:54:00:b2:05:d9}
	I0210 10:55:46.847054  452768 main.go:141] libmachine: (multinode-742240-m02) DBG | domain multinode-742240-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:b2:05:d9 in network mk-multinode-742240
	I0210 10:55:46.847229  452768 main.go:141] libmachine: (multinode-742240-m02) Calling .GetSSHPort
	I0210 10:55:46.847403  452768 main.go:141] libmachine: (multinode-742240-m02) Calling .GetSSHKeyPath
	I0210 10:55:46.847556  452768 main.go:141] libmachine: (multinode-742240-m02) Calling .GetSSHUsername
	I0210 10:55:46.847680  452768 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-421267/.minikube/machines/multinode-742240-m02/id_rsa Username:docker}
	I0210 10:55:46.931844  452768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:55:46.944529  452768 status.go:176] multinode-742240-m02 status: &{Name:multinode-742240-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:55:46.944575  452768 status.go:174] checking status of multinode-742240-m03 ...
	I0210 10:55:46.944937  452768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 10:55:46.944988  452768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:55:46.961055  452768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0210 10:55:46.961572  452768 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:55:46.962093  452768 main.go:141] libmachine: Using API Version  1
	I0210 10:55:46.962146  452768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:55:46.962519  452768 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:55:46.962721  452768 main.go:141] libmachine: (multinode-742240-m03) Calling .GetState
	I0210 10:55:46.964388  452768 status.go:371] multinode-742240-m03 host status = "Stopped" (err=<nil>)
	I0210 10:55:46.964408  452768 status.go:384] host is not running, skipping remaining checks
	I0210 10:55:46.964416  452768 status.go:176] multinode-742240-m03 status: &{Name:multinode-742240-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.43s)
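Note the StopNode result above: with m03 stopped, `minikube status` exits with status 7 even though the command itself worked, and the test still passes because it expects that code. A script wrapping `status` therefore has to branch on the exit code instead of treating any non-zero as a hard failure. A small sketch with a stub in place of the real binary (the function name is made up):

```shell
# status_stub stands in for `minikube -p <profile> status` against a
# cluster that has one stopped node, which exits with status 7.
status_stub() { return 7; }

if status_stub; then
  state='all components running'
else
  state="degraded: exit status $?"   # $? still holds the if-condition's status here
fi
echo "$state"
```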

TestMultiNode/serial/StartAfterStop (42.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 node start m03 -v=7 --alsologtostderr
E0210 10:55:52.901347  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-742240 node start m03 -v=7 --alsologtostderr: (41.624109393s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.24s)

TestMultiNode/serial/RestartKeepsNodes (227.17s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-742240
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-742240
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-742240: (28.079898028s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-742240 --wait=true -v=8 --alsologtostderr
E0210 10:59:48.196371  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-742240 --wait=true -v=8 --alsologtostderr: (3m18.984019825s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-742240
--- PASS: TestMultiNode/serial/RestartKeepsNodes (227.17s)

TestMultiNode/serial/DeleteNode (2.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-742240 node delete m03: (1.790300066s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)

TestMultiNode/serial/StopMultiNode (25.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-742240 stop: (24.870053547s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-742240 status: exit status 7 (86.720023ms)

-- stdout --
	multinode-742240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-742240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr: exit status 7 (86.633569ms)

-- stdout --
	multinode-742240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-742240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0210 11:00:43.717042  454725 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:00:43.717183  454725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:00:43.717195  454725 out.go:358] Setting ErrFile to fd 2...
	I0210 11:00:43.717202  454725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:00:43.717369  454725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-421267/.minikube/bin
	I0210 11:00:43.717568  454725 out.go:352] Setting JSON to false
	I0210 11:00:43.717610  454725 mustload.go:65] Loading cluster: multinode-742240
	I0210 11:00:43.717698  454725 notify.go:220] Checking for updates...
	I0210 11:00:43.718058  454725 config.go:182] Loaded profile config "multinode-742240": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:00:43.718084  454725 status.go:174] checking status of multinode-742240 ...
	I0210 11:00:43.718553  454725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 11:00:43.718612  454725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:00:43.734608  454725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0210 11:00:43.735140  454725 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:00:43.735752  454725 main.go:141] libmachine: Using API Version  1
	I0210 11:00:43.735789  454725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:00:43.736195  454725 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:00:43.736395  454725 main.go:141] libmachine: (multinode-742240) Calling .GetState
	I0210 11:00:43.738276  454725 status.go:371] multinode-742240 host status = "Stopped" (err=<nil>)
	I0210 11:00:43.738295  454725 status.go:384] host is not running, skipping remaining checks
	I0210 11:00:43.738302  454725 status.go:176] multinode-742240 status: &{Name:multinode-742240 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:00:43.738337  454725 status.go:174] checking status of multinode-742240-m02 ...
	I0210 11:00:43.738667  454725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0210 11:00:43.738705  454725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:00:43.753294  454725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0210 11:00:43.753674  454725 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:00:43.754121  454725 main.go:141] libmachine: Using API Version  1
	I0210 11:00:43.754146  454725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:00:43.754464  454725 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:00:43.754641  454725 main.go:141] libmachine: (multinode-742240-m02) Calling .GetState
	I0210 11:00:43.756101  454725 status.go:371] multinode-742240-m02 host status = "Stopped" (err=<nil>)
	I0210 11:00:43.756114  454725 status.go:384] host is not running, skipping remaining checks
	I0210 11:00:43.756121  454725 status.go:176] multinode-742240-m02 status: &{Name:multinode-742240-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.04s)

TestMultiNode/serial/RestartMultiNode (100.52s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-742240 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0210 11:00:52.898592  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-742240 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m39.99352882s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-742240 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (100.52s)

TestMultiNode/serial/ValidateNameConflict (51.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-742240
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-742240-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-742240-m02 --driver=kvm2 : exit status 14 (66.232139ms)

-- stdout --
	* [multinode-742240-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-742240-m02' is duplicated with machine name 'multinode-742240-m02' in profile 'multinode-742240'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-742240-m03 --driver=kvm2 
E0210 11:02:51.265389  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-742240-m03 --driver=kvm2 : (50.066798484s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-742240
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-742240: exit status 80 (218.755376ms)

-- stdout --
	* Adding node m03 to cluster multinode-742240 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-742240-m03 already exists in multinode-742240-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-742240-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.18s)
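The first failure above (exit status 14, MK_USAGE) is a plain name-collision guard: 'multinode-742240-m02' already exists as a machine name inside the 'multinode-742240' profile. The check amounts to membership in the set of existing names; a stand-alone sketch of that guard (the name list below is illustrative, not minikube's real bookkeeping):

```shell
# Existing machine names, mirroring the log: the -m02 worker belongs
# to profile multinode-742240, so that name is already taken.
existing='multinode-742240 multinode-742240-m02'
requested='multinode-742240-m02'

case " $existing " in
  *" $requested "*) echo 'Profile name should be unique'; rc=14 ;;
  *)                rc=0 ;;
esac
echo "exit status: $rc"
```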

TestPreload (151.26s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-833434 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-833434 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m23.037094241s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-833434 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-833434 image pull gcr.io/k8s-minikube/busybox: (1.643731181s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-833434
E0210 11:04:48.200046  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-833434: (12.605599182s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-833434 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-833434 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (52.661045274s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-833434 image list
helpers_test.go:175: Cleaning up "test-preload-833434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-833434
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-833434: (1.106299748s)
--- PASS: TestPreload (151.26s)
TestScheduledStopUnix (120.7s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-057726 --memory=2048 --driver=kvm2 
E0210 11:05:52.902097  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-057726 --memory=2048 --driver=kvm2 : (49.038110106s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-057726 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-057726 -n scheduled-stop-057726
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-057726 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0210 11:06:37.717786  428547 retry.go:31] will retry after 88.64µs: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.718916  428547 retry.go:31] will retry after 190.972µs: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.720068  428547 retry.go:31] will retry after 120.673µs: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.721148  428547 retry.go:31] will retry after 356.835µs: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.722296  428547 retry.go:31] will retry after 260.668µs: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.723400  428547 retry.go:31] will retry after 1.067174ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.724572  428547 retry.go:31] will retry after 765.19µs: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.725735  428547 retry.go:31] will retry after 1.366247ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.727947  428547 retry.go:31] will retry after 3.223813ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.732181  428547 retry.go:31] will retry after 2.721865ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.735412  428547 retry.go:31] will retry after 7.636939ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.743634  428547 retry.go:31] will retry after 5.53418ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.749882  428547 retry.go:31] will retry after 14.479074ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.765167  428547 retry.go:31] will retry after 25.798612ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
I0210 11:06:37.791444  428547 retry.go:31] will retry after 18.288275ms: open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/scheduled-stop-057726/pid: no such file or directory
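The `retry.go:31` lines above show minikube's backoff loop polling for the scheduled-stop pid file, with delays that roughly double on each attempt plus random jitter (88µs, 190µs, ... 25ms). A minimal Python sketch of that jittered-exponential-backoff pattern (hypothetical helper name; not minikube's actual `retry.go` code):

```python
import random


def backoff_delays(initial=0.0001, factor=2.0, jitter=0.5, attempts=15):
    """Yield delays that roughly double each attempt, each perturbed by
    +/- `jitter`, mirroring the retry intervals in the log above."""
    delay = initial
    for _ in range(attempts):
        # randomize each step so concurrent pollers don't retry in lockstep
        yield delay * (1 + random.uniform(-jitter, jitter))
        delay *= factor


delays = list(backoff_delays())
```

The jitter is why adjacent delays in the log occasionally shrink (190µs followed by 120µs) even though the overall trend is upward.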
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-057726 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-057726 -n scheduled-stop-057726
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-057726
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-057726 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-057726
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-057726: exit status 7 (68.888566ms)
-- stdout --
	scheduled-stop-057726
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-057726 -n scheduled-stop-057726
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-057726 -n scheduled-stop-057726: exit status 7 (67.157545ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-057726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-057726
--- PASS: TestScheduledStopUnix (120.70s)
TestSkaffold (123.91s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3078376405 version
skaffold_test.go:63: skaffold version: v2.14.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-613274 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-613274 --memory=2600 --driver=kvm2 : (46.14884301s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3078376405 run --minikube-profile skaffold-613274 --kube-context skaffold-613274 --status-check=true --port-forward=false --interactive=false
E0210 11:08:55.966772  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3078376405 run --minikube-profile skaffold-613274 --kube-context skaffold-613274 --status-check=true --port-forward=false --interactive=false: (1m4.861752704s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-567fbff45-j6bvz" [40427bee-46c4-4fef-87fe-9f9fcac8bbde] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003695621s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-77959f94c5-h85rz" [90ad46d8-4376-4b35-8416-be88fbd4b5e0] Running
E0210 11:09:48.195566  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002950227s
helpers_test.go:175: Cleaning up "skaffold-613274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-613274
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-613274: (1.033458252s)
--- PASS: TestSkaffold (123.91s)
TestRunningBinaryUpgrade (226.1s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1170838966 start -p running-upgrade-800739 --memory=2200 --vm-driver=kvm2 
I0210 11:09:55.095566  428547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 11:09:57.093710  428547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0210 11:09:57.123599  428547 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0210 11:09:57.123635  428547 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0210 11:09:57.123694  428547 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 11:09:57.123719  428547 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3154909640/002/docker-machine-driver-kvm2
I0210 11:09:57.176354  428547 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3154909640/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000130358 gz:0xc000130450 tar:0xc000130400 tar.bz2:0xc000130410 tar.gz:0xc000130420 tar.xz:0xc000130430 tar.zst:0xc000130440 tbz2:0xc000130410 tgz:0xc000130420 txz:0xc000130430 tzst:0xc000130440 xz:0xc000130458 zip:0xc0001304a0 zst:0xc0001304d0] Getters:map[file:0xc00057fb90 http:0xc001951900 https:0xc001951950] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 11:09:57.176400  428547 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3154909640/002/docker-machine-driver-kvm2
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1170838966 start -p running-upgrade-800739 --memory=2200 --vm-driver=kvm2 : (2m12.832322105s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-800739 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-800739 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m31.414410652s)
helpers_test.go:175: Cleaning up "running-upgrade-800739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-800739
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-800739: (1.249800699s)
--- PASS: TestRunningBinaryUpgrade (226.10s)
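The `driver.go:46` lines above record a download fallback: the arch-specific driver artifact fails its checksum fetch with a 404, so minikube retries the un-suffixed "common" release URL. A minimal Python sketch of that try-in-order pattern (hypothetical function names `driver_urls` and `download_with_fallback`; not minikube's actual download code):

```python
def driver_urls(version, name="docker-machine-driver-kvm2", arch="amd64"):
    """Candidate release URLs in the order the log tries them:
    arch-specific first, then the common (un-suffixed) artifact."""
    base = f"https://github.com/kubernetes/minikube/releases/download/{version}/{name}"
    return [f"{base}-{arch}", base]


def download_with_fallback(urls, fetch):
    """Call `fetch` on each URL in turn and return the first success;
    re-raise the last error if every candidate fails."""
    last_err = None
    for url in urls:
        try:
            return fetch(url)
        except OSError as err:  # e.g. "bad response code: 404"
            last_err = err
    raise last_err
```

The `fetch` callable stands in for the HTTP getter, which keeps the fallback logic testable without network access.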
TestKubernetesUpgrade (238.78s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
I0210 11:09:52.924188  428547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 11:09:52.924343  428547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0210 11:09:52.962262  428547 install.go:62] docker-machine-driver-kvm2: exit status 1
W0210 11:09:52.962600  428547 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 11:09:52.962664  428547 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3154909640/001/docker-machine-driver-kvm2
I0210 11:09:53.183624  428547 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3154909640/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000130358 gz:0xc000130450 tar:0xc000130400 tar.bz2:0xc000130410 tar.gz:0xc000130420 tar.xz:0xc000130430 tar.zst:0xc000130440 tbz2:0xc000130410 tgz:0xc000130420 txz:0xc000130430 tzst:0xc000130440 xz:0xc000130458 zip:0xc0001304a0 zst:0xc0001304d0] Getters:map[file:0xc001d3af60 http:0xc001a18af0 https:0xc001a18b40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 11:09:53.183676  428547 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3154909640/001/docker-machine-driver-kvm2
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m23.075259966s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-955127
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-955127: (12.506160095s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-955127 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-955127 status --format={{.Host}}: exit status 7 (72.176968ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 : (1m20.093070263s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-955127 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (89.928052ms)
-- stdout --
	* [kubernetes-upgrade-955127] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-955127
	    minikube start -p kubernetes-upgrade-955127 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9551272 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-955127 --kubernetes-version=v1.32.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-955127 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 : (1m1.691740164s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-955127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-955127
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-955127: (1.198860619s)
--- PASS: TestKubernetesUpgrade (238.78s)
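The exit-status-106 case above comes from comparing the requested Kubernetes version against the version the cluster is already running and refusing to go backwards. A minimal sketch of that kind of check (hypothetical helper assuming plain semver tuple comparison; not minikube's actual implementation):

```python
def parse_version(v):
    """Parse 'v1.32.1' into the comparable tuple (1, 32, 1)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))


def check_requested_version(current, requested):
    """Return an error code when `requested` is older than `current`,
    mirroring the K8S_DOWNGRADE_UNSUPPORTED refusal in the log above."""
    if parse_version(requested) < parse_version(current):
        return "K8S_DOWNGRADE_UNSUPPORTED"
    return None
```

Tuple comparison handles multi-digit components correctly (so v1.32.1 > v1.20.0, where naive string comparison would get it wrong).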
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-949742 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-949742 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (86.387982ms)
-- stdout --
	* [NoKubernetes-949742] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-421267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-421267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
TestNoKubernetes/serial/StartWithK8s (65.8s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-949742 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-949742 --driver=kvm2 : (1m5.534280657s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-949742 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (65.80s)
TestPause/serial/Start (90.73s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-962179 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-962179 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m30.726158571s)
--- PASS: TestPause/serial/Start (90.73s)
TestStoppedBinaryUpgrade/Setup (0.37s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)
TestStoppedBinaryUpgrade/Upgrade (146.9s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4226286484 start -p stopped-upgrade-267291 --memory=2200 --vm-driver=kvm2 
E0210 11:14:40.868206  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:40.874606  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:40.885958  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:40.907377  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:40.948808  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:41.030317  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:41.191910  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:41.513600  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:42.155674  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:43.437513  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4226286484 start -p stopped-upgrade-267291 --memory=2200 --vm-driver=kvm2 : (1m31.492277005s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4226286484 -p stopped-upgrade-267291 stop
E0210 11:15:52.898525  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4226286484 -p stopped-upgrade-267291 stop: (12.439136322s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-267291 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-267291 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (42.970559681s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.90s)
TestNoKubernetes/serial/StartWithStopK8s (37.07s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-949742 --no-kubernetes --driver=kvm2 
E0210 11:14:45.998916  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:48.195785  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:51.120679  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:15:01.362171  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-949742 --no-kubernetes --driver=kvm2 : (35.816300796s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-949742 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-949742 status -o json: exit status 2 (229.765069ms)
-- stdout --
	{"Name":"NoKubernetes-949742","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-949742
E0210 11:15:21.843667  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-949742: (1.027054126s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.07s)
TestNoKubernetes/serial/Start (34.88s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-949742 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-949742 --no-kubernetes --driver=kvm2 : (34.882603379s)
--- PASS: TestNoKubernetes/serial/Start (34.88s)
TestPause/serial/SecondStartNoReconfiguration (73.49s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-962179 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-962179 --alsologtostderr -v=1 --driver=kvm2 : (1m13.461242007s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (73.49s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-949742 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-949742 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.315554ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (19.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.745925329s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0210 11:16:02.805668  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.306159692s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.05s)

TestNoKubernetes/serial/Stop (2.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-949742
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-949742: (2.304743406s)
--- PASS: TestNoKubernetes/serial/Stop (2.30s)

TestNoKubernetes/serial/StartNoArgs (28.65s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-949742 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-949742 --driver=kvm2 : (28.64626482s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (28.65s)

TestPause/serial/Pause (0.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-962179 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-962179 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-962179 --output=json --layout=cluster: exit status 2 (246.296827ms)

-- stdout --
	{"Name":"pause-962179","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-962179","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

TestPause/serial/Unpause (0.55s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-962179 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.55s)

TestPause/serial/PauseAgain (0.7s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-962179 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.70s)

TestPause/serial/DeletePaused (1.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-962179 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-962179 --alsologtostderr -v=5: (1.046581492s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

TestPause/serial/VerifyDeletedResources (14.27s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.27176606s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.27s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-949742 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-949742 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.435695ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-267291
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-267291: (1.106312059s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (172.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-617698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-617698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m52.610784247s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (172.61s)

TestStartStop/group/no-preload/serial/FirstStart (128.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-222212 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-222212 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1: (2m8.373333696s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (128.37s)

TestStartStop/group/embed-certs/serial/FirstStart (120.88s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-066111 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1
E0210 11:17:24.727212  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-066111 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1: (2m0.88258078s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (120.88s)

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-222212 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b366b4ad-0d19-425d-9bc9-718dae707bcf] Pending
helpers_test.go:344: "busybox" [b366b4ad-0d19-425d-9bc9-718dae707bcf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b366b4ad-0d19-425d-9bc9-718dae707bcf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004121733s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-222212 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-222212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-222212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050019821s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-222212 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (13.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-222212 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-222212 --alsologtostderr -v=3: (13.36232644s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.36s)

TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-066111 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ae7162ee-86ac-400a-923f-0c561b3802fa] Pending
helpers_test.go:344: "busybox" [ae7162ee-86ac-400a-923f-0c561b3802fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ae7162ee-86ac-400a-923f-0c561b3802fa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003555054s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-066111 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222212 -n no-preload-222212
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222212 -n no-preload-222212: exit status 7 (76.78168ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-222212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (302.43s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-222212 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-222212 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1: (5m2.170844194s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222212 -n no-preload-222212
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.43s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-066111 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-066111 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/embed-certs/serial/Stop (13.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-066111 --alsologtostderr -v=3
E0210 11:19:31.266910  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:19:40.866725  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-066111 --alsologtostderr -v=3: (13.33794354s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-066111 -n embed-certs-066111
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-066111 -n embed-certs-066111: exit status 7 (91.819959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-066111 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (310.14s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-066111 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-066111 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1: (5m9.842026543s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-066111 -n embed-certs-066111
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (310.14s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-617698 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8bd06f21-35cb-48cb-8dc1-25dbaad38ae9] Pending
helpers_test.go:344: "busybox" [8bd06f21-35cb-48cb-8dc1-25dbaad38ae9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8bd06f21-35cb-48cb-8dc1-25dbaad38ae9] Running
E0210 11:19:48.196000  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003748825s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-617698 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-617698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-617698 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/old-k8s-version/serial/Stop (13.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-617698 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-617698 --alsologtostderr -v=3: (13.482907299s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.48s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-732540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-732540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1: (1m49.979788406s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-617698 -n old-k8s-version-617698
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-617698 -n old-k8s-version-617698: exit status 7 (67.760207ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-617698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (564.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-617698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0210 11:20:08.569493  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:52.898302  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-617698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (9m24.561415681s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-617698 -n old-k8s-version-617698
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (564.80s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-732540 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d3986426-e305-4c36-a084-85dbf0db6025] Pending
helpers_test.go:344: "busybox" [d3986426-e305-4c36-a084-85dbf0db6025] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d3986426-e305-4c36-a084-85dbf0db6025] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003983299s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-732540 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-732540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-732540 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-732540 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-732540 --alsologtostderr -v=3: (13.30463182s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540: exit status 7 (77.523941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-732540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-732540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1
E0210 11:23:31.511051  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:31.517491  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:31.528885  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:31.550348  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:31.591741  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:31.673242  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:31.834868  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:32.156769  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:32.798843  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:34.080815  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:36.642328  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:41.764571  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:52.006288  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:24:12.488285  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-732540 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1: (4m58.66768399s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.93s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zc5f4" [3dfc6e56-5ccb-4399-9bd9-065c2e27b786] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003999773s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zc5f4" [3dfc6e56-5ccb-4399-9bd9-065c2e27b786] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003672432s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-222212 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-222212 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.52s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-222212 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222212 -n no-preload-222212
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222212 -n no-preload-222212: exit status 2 (242.216938ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-222212 -n no-preload-222212
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-222212 -n no-preload-222212: exit status 2 (249.200624ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-222212 --alsologtostderr -v=1
E0210 11:24:40.866597  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222212 -n no-preload-222212
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-222212 -n no-preload-222212
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.52s)

TestStartStop/group/newest-cni/serial/FirstStart (66.96s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-337795 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1
E0210 11:24:48.196225  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:24:53.449610  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-337795 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1: (1m6.954847528s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.96s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-kp447" [812d5b6b-af8a-4e6a-acf7-6afd26468b81] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00337423s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-kp447" [812d5b6b-af8a-4e6a-acf7-6afd26468b81] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003688023s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-066111 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-066111 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-066111 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-066111 -n embed-certs-066111
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-066111 -n embed-certs-066111: exit status 2 (242.183385ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-066111 -n embed-certs-066111
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-066111 -n embed-certs-066111: exit status 2 (246.503572ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-066111 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-066111 -n embed-certs-066111
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-066111 -n embed-certs-066111
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.40s)

TestNetworkPlugins/group/auto/Start (64.58s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0210 11:25:35.968760  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m4.576139121s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.58s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-337795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/newest-cni/serial/Stop (8.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-337795 --alsologtostderr -v=3
E0210 11:25:52.898189  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-337795 --alsologtostderr -v=3: (8.331797246s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-337795 -n newest-cni-337795
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-337795 -n newest-cni-337795: exit status 7 (78.453443ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-337795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (39.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-337795 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-337795 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1: (38.919164728s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-337795 -n newest-cni-337795
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.25s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-632332 "pgrep -a kubelet"
I0210 11:26:13.756834  428547 config.go:182] Loaded profile config "auto-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (13.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6n9t2" [a76cc7a9-6ea1-4760-a1b9-ea88b0ecfc06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0210 11:26:15.371493  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-6n9t2" [a76cc7a9-6ea1-4760-a1b9-ea88b0ecfc06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004426315s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.25s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-337795 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (2.71s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-337795 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-337795 -n newest-cni-337795
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-337795 -n newest-cni-337795: exit status 2 (276.610174ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-337795 -n newest-cni-337795
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-337795 -n newest-cni-337795: exit status 2 (262.813916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-337795 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-337795 -n newest-cni-337795
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-337795 -n newest-cni-337795
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.71s)

TestNetworkPlugins/group/calico/Start (124.06s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m4.058668631s)
--- PASS: TestNetworkPlugins/group/calico/Start (124.06s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7nfz4" [3b2a5e88-540c-4805-badf-94cf305c42f0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004201605s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7nfz4" [3b2a5e88-540c-4805-badf-94cf305c42f0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007596677s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-732540 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-732540 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-732540 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540: exit status 2 (270.923813ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540: exit status 2 (278.096091ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-732540 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-732540 -n default-k8s-diff-port-732540
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m25.876152423s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.88s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7vk6q" [8a0d20f0-6fb6-4eaa-b3b4-1767ba1803d1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00462302s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-632332 "pgrep -a kubelet"
I0210 11:28:50.213666  428547 config.go:182] Loaded profile config "custom-flannel-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t7gwc" [4c6bb231-879a-4b12-a999-2c1c2cc286de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t7gwc" [4c6bb231-879a-4b12-a999-2c1c2cc286de] Running
E0210 11:28:59.213213  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/gvisor-372086/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004387021s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-632332 "pgrep -a kubelet"
I0210 11:28:54.346082  428547 config.go:182] Loaded profile config "calico-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gwdg5" [db50c4cd-4442-4309-a6ad-c3dcd94f12da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gwdg5" [db50c4cd-4442-4309-a6ad-c3dcd94f12da] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003671698s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m4.91546501s)
--- PASS: TestNetworkPlugins/group/false/Start (64.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0210 11:29:22.486371  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/no-preload-222212/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m28.017550805s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m54.676905117s)
--- PASS: TestNetworkPlugins/group/flannel/Start (114.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-prls5" [b96f571e-58c1-4b62-88ef-9e235b0c541c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004584629s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-prls5" [b96f571e-58c1-4b62-88ef-9e235b0c541c] Running
E0210 11:29:40.866352  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/skaffold-613274/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:29:42.968047  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/no-preload-222212/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00428407s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-617698 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-617698 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-617698 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-617698 -n old-k8s-version-617698
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-617698 -n old-k8s-version-617698: exit status 2 (241.409254ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-617698 -n old-k8s-version-617698
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-617698 -n old-k8s-version-617698: exit status 2 (254.404495ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-617698 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-617698 -n old-k8s-version-617698
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-617698 -n old-k8s-version-617698
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.47s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0210 11:29:48.195499  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/addons-830295/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m54.019444758s)
--- PASS: TestNetworkPlugins/group/bridge/Start (114.02s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-632332 "pgrep -a kubelet"
I0210 11:30:22.565838  428547 config.go:182] Loaded profile config "false-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kkzx5" [e31e80bf-faac-48dc-aacd-2d6bf23c1a87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0210 11:30:23.929633  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/no-preload-222212/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-kkzx5" [e31e80bf-faac-48dc-aacd-2d6bf23c1a87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003177836s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-632332 "pgrep -a kubelet"
I0210 11:30:47.111355  428547 config.go:182] Loaded profile config "enable-default-cni-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xl2kj" [6d1ec7dd-c791-4698-ac0d-7f7a5d83e023] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xl2kj" [6d1ec7dd-c791-4698-ac0d-7f7a5d83e023] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004035311s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0210 11:30:52.898356  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/functional-607439/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-632332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m15.730121171s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (75.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vc78c" [51ce07a5-bdfa-4903-a059-55a85584988b] Running
E0210 11:31:19.129392  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/auto-632332/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:24.251365  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/auto-632332/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00518737s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-632332 "pgrep -a kubelet"
I0210 11:31:25.145965  428547 config.go:182] Loaded profile config "flannel-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-28fzp" [90035796-2ec3-47c8-9777-ae829c154c69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-28fzp" [90035796-2ec3-47c8-9777-ae829c154c69] Running
E0210 11:31:34.493480  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/auto-632332/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004378221s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-632332 "pgrep -a kubelet"
I0210 11:31:41.997047  428547 config.go:182] Loaded profile config "bridge-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wmlmb" [3feea290-b9b1-42e2-84c4-f6ef0e034f58] Pending
helpers_test.go:344: "netcat-5d86dc444-wmlmb" [3feea290-b9b1-42e2-84c4-f6ef0e034f58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0210 11:31:45.851560  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/no-preload-222212/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:46.972256  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:46.978787  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:46.990247  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:47.012290  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:47.054458  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:47.136369  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:47.298107  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:47.620091  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:31:48.261842  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-wmlmb" [3feea290-b9b1-42e2-84c4-f6ef0e034f58] Running
E0210 11:31:49.544081  428547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/default-k8s-diff-port-732540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004247924s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.27s)
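The `waiting 15m0s for pods matching "app=netcat"` lines above come from a poll loop: the harness repeatedly checks the matching pods and declares success once they report Running. A minimal sketch of such a loop (the helper name, the injectable clock/sleep, and the return value are assumptions for illustration, not the actual net_test.go code, which queries the Kubernetes API):

```python
import time

def wait_for_condition(check, timeout=900.0, interval=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Returns the elapsed seconds on success; raises TimeoutError otherwise.
    `clock` and `sleep` are injectable so the loop can be tested without
    real waiting. (Hypothetical helper, not the real test harness code.)
    """
    start = clock()
    while True:
        if check():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)
```

With `timeout=900` and a one-second interval this mirrors the report's 15m budget; the "healthy within 12.004s" line is simply the elapsed value at the first successful check.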

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
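The repeated `cert_rotation` errors interleaved with the NetCatPod output above occur because the shared kubeconfig still references client certificates of profiles (`no-preload-222212`, `default-k8s-diff-port-732540`) that earlier tests already deleted from disk. A stale entry of that kind can be spotted with a sketch like this (the regex-based parsing and helper name are assumptions for illustration; a kubeconfig is YAML, so a real check should use a proper YAML parser):

```python
import os
import re

# Matches the flat "client-certificate: /path" entries minikube writes.
# (Hypothetical check; does not handle inline client-certificate-data.)
CERT_RE = re.compile(r"^\s*client-certificate:\s*(\S+)", re.MULTILINE)

def stale_client_certs(kubeconfig_text):
    """Return client-certificate paths referenced by the kubeconfig
    that no longer exist on disk."""
    return [p for p in CERT_RE.findall(kubeconfig_text)
            if not os.path.exists(p)]
```

Deleting the offending profile entries (or the whole stale user stanza) silences the `UnhandledError` spam without affecting the passing tests.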

TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-632332 "pgrep -a kubelet"
I0210 11:32:08.635882  428547 config.go:182] Loaded profile config "kubenet-632332": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)
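KubeletFlags runs `pgrep -a kubelet` over SSH and inspects the kubelet command line for the expected CNI settings. Extracting the flags from a `pgrep -a` line (format: `PID full-command args...`) can be sketched as follows (the helper name and the sample line in the test are assumptions, not output from this run):

```python
def kubelet_flags(pgrep_line):
    """Given one line of `pgrep -a kubelet` output ("PID CMD ARGS..."),
    return just the long-form command-line flags."""
    parts = pgrep_line.split()
    # parts[0] is the PID, parts[1] the binary path; flags follow.
    return [p for p in parts[1:] if p.startswith("--")]
```

The test then asserts that flags such as the configured `--network-plugin` or runtime endpoint are present.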

TestNetworkPlugins/group/kubenet/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-632332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-svmgk" [d2bd9288-e853-4da6-a5c9-987218c9bf65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-svmgk" [d2bd9288-e853-4da6-a5c9-987218c9bf65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003044689s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.22s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-632332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-632332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
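The Localhost and HairPin checks both rely on `nc -w 5 -i 5 -z`, which only attempts a TCP connect and sends no payload (`-z` is a zero-I/O port scan, `-w 5` a 5-second timeout). The equivalent connect-only probe, as a self-contained sketch (the function name is an assumption):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Connect-only TCP probe, roughly what `nc -z -w 5 HOST PORT` does:
    try to complete a handshake, report success, transfer no data."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For HairPin the target is the pod's own service name (`nc ... netcat 8080`), so a successful connect proves hairpin NAT works; for Localhost the target is `localhost 8080`.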
Test skip (34/338)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
187 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
214 TestKicCustomNetwork 0
215 TestKicExistingNetwork 0
216 TestKicCustomSubnet 0
217 TestKicStaticIP 0
249 TestChangeNoneUser 0
252 TestScheduledStopWindows 0
256 TestInsufficientStorage 0
260 TestMissingContainerUpgrade 0
269 TestStartStop/group/disable-driver-mounts 0.15
300 TestNetworkPlugins/group/cilium 3.86

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-863528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-863528
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (3.86s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-632332 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-632332

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-632332

>>> host: /etc/nsswitch.conf:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /etc/hosts:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /etc/resolv.conf:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-632332

>>> host: crictl pods:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: crictl containers:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> k8s: describe netcat deployment:
error: context "cilium-632332" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-632332" does not exist

>>> k8s: netcat logs:
error: context "cilium-632332" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-632332" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-632332" does not exist

>>> k8s: coredns logs:
error: context "cilium-632332" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-632332" does not exist

>>> k8s: api server logs:
error: context "cilium-632332" does not exist

>>> host: /etc/cni:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: ip a s:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: ip r s:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: iptables-save:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: iptables table nat:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-632332

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-632332

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-632332" does not exist
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-632332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-632332

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-632332

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-632332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-632332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-632332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-632332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-632332" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-421267/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.120:8443
  name: cert-expiration-120520
contexts:
- context:
    cluster: cert-expiration-120520
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-120520
  name: cert-expiration-120520
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-120520
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/cert-expiration-120520/client.crt
    client-key: /home/jenkins/minikube-integration/20385-421267/.minikube/profiles/cert-expiration-120520/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-632332

>>> host: docker daemon status:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: docker daemon config:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: docker system info:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: cri-docker daemon status:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: cri-docker daemon config:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: cri-dockerd version:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: containerd daemon status:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: containerd daemon config:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: containerd config dump:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: crio daemon status:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: crio daemon config:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: /etc/crio:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

>>> host: crio config:
* Profile "cilium-632332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632332"

----------------------- debugLogs end: cilium-632332 [took: 3.694611149s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-632332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-632332
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)