Test Report: KVM_Linux 20512

48b5bd1b410deb6f0834786c8abc7687a18ec8ba:2025-04-14:39137

Failed tests (1/344)

Order   Failed test                              Duration (s)
245     TestMultiNode/serial/RestartMultiNode    84.01
TestMultiNode/serial/RestartMultiNode (84.01s)
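
To reproduce the failing step outside CI, re-run the exact command the test issued (taken verbatim from the dbg line in the log below):

	out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr --driver=kvm2

To re-run only this test through the Go harness, standard subtest selection applies (a sketch that assumes the minikube repo's test/integration layout; the timeout value is illustrative, and driver/binary selection flags, which follow the harness's own conventions, are omitted):

	go test ./test/integration -v -timeout 30m -run "TestMultiNode/serial/RestartMultiNode"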

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0414 14:25:32.139132  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (1m23.760847776s)

-- stdout --
	* [multinode-185794] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "multinode-185794" primary control-plane node in "multinode-185794" cluster
	* Restarting existing kvm2 VM for "multinode-185794" ...
	
	

-- /stdout --
** stderr ** 
	I0414 14:24:32.296320  685943 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:24:32.296568  685943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:24:32.296577  685943 out.go:358] Setting ErrFile to fd 2...
	I0414 14:24:32.296581  685943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:24:32.296752  685943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 14:24:32.297278  685943 out.go:352] Setting JSON to false
	I0414 14:24:32.298228  685943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":29223,"bootTime":1744611449,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:24:32.298290  685943 start.go:139] virtualization: kvm guest
	I0414 14:24:32.300170  685943 out.go:177] * [multinode-185794] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:24:32.301448  685943 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:24:32.301444  685943 notify.go:220] Checking for updates...
	I0414 14:24:32.303828  685943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:24:32.305036  685943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	I0414 14:24:32.306127  685943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	I0414 14:24:32.307098  685943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:24:32.308091  685943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:24:32.309479  685943 config.go:182] Loaded profile config "multinode-185794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 14:24:32.309843  685943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:24:32.309901  685943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:24:32.326380  685943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0414 14:24:32.326970  685943 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:24:32.327634  685943 main.go:141] libmachine: Using API Version  1
	I0414 14:24:32.327672  685943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:24:32.328047  685943 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:24:32.328246  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:32.328480  685943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:24:32.328816  685943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:24:32.328871  685943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:24:32.344288  685943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0414 14:24:32.344777  685943 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:24:32.345337  685943 main.go:141] libmachine: Using API Version  1
	I0414 14:24:32.345360  685943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:24:32.345670  685943 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:24:32.345839  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:32.382663  685943 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:24:32.383858  685943 start.go:297] selected driver: kvm2
	I0414 14:24:32.383877  685943 start.go:901] validating driver "kvm2" against &{Name:multinode-185794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-185794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.75 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:24:32.384007  685943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:24:32.384350  685943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:24:32.384424  685943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-652075/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:24:32.400421  685943 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:24:32.401202  685943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:24:32.401245  685943 cni.go:84] Creating CNI manager for ""
	I0414 14:24:32.401287  685943 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0414 14:24:32.401365  685943 start.go:340] cluster config:
	{Name:multinode-185794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-185794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.75 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:24:32.401501  685943 iso.go:125] acquiring lock: {Name:mk31812832bbbb744b9a661285e7c7972432ea16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:24:32.404099  685943 out.go:177] * Starting "multinode-185794" primary control-plane node in "multinode-185794" cluster
	I0414 14:24:32.405297  685943 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0414 14:24:32.405339  685943 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0414 14:24:32.405349  685943 cache.go:56] Caching tarball of preloaded images
	I0414 14:24:32.405465  685943 preload.go:172] Found /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:24:32.405479  685943 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0414 14:24:32.405618  685943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/multinode-185794/config.json ...
	I0414 14:24:32.405812  685943 start.go:360] acquireMachinesLock for multinode-185794: {Name:mk9c6cfa0e29a56fc46c94c59cf5ffe9bb360df2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:24:32.405865  685943 start.go:364] duration metric: took 31.854µs to acquireMachinesLock for "multinode-185794"
	I0414 14:24:32.405879  685943 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:24:32.405887  685943 fix.go:54] fixHost starting: 
	I0414 14:24:32.406166  685943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:24:32.406200  685943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:24:32.421699  685943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42309
	I0414 14:24:32.422149  685943 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:24:32.422566  685943 main.go:141] libmachine: Using API Version  1
	I0414 14:24:32.422587  685943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:24:32.422933  685943 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:24:32.423131  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:32.423315  685943 main.go:141] libmachine: (multinode-185794) Calling .GetState
	I0414 14:24:32.425030  685943 fix.go:112] recreateIfNeeded on multinode-185794: state=Stopped err=<nil>
	I0414 14:24:32.425056  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	W0414 14:24:32.425207  685943 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:24:32.426831  685943 out.go:177] * Restarting existing kvm2 VM for "multinode-185794" ...
	I0414 14:24:32.427917  685943 main.go:141] libmachine: (multinode-185794) Calling .Start
	I0414 14:24:32.428094  685943 main.go:141] libmachine: (multinode-185794) starting domain...
	I0414 14:24:32.428118  685943 main.go:141] libmachine: (multinode-185794) ensuring networks are active...
	I0414 14:24:32.428870  685943 main.go:141] libmachine: (multinode-185794) Ensuring network default is active
	I0414 14:24:32.429194  685943 main.go:141] libmachine: (multinode-185794) Ensuring network mk-multinode-185794 is active
	I0414 14:24:32.429598  685943 main.go:141] libmachine: (multinode-185794) getting domain XML...
	I0414 14:24:32.430295  685943 main.go:141] libmachine: (multinode-185794) creating domain...
	I0414 14:24:33.656465  685943 main.go:141] libmachine: (multinode-185794) waiting for IP...
	I0414 14:24:33.657371  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:33.657691  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:33.657804  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:33.657702  685980 retry.go:31] will retry after 232.613535ms: waiting for domain to come up
	I0414 14:24:33.892514  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:33.893012  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:33.893039  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:33.892989  685980 retry.go:31] will retry after 383.114871ms: waiting for domain to come up
	I0414 14:24:34.277559  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:34.278009  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:34.278085  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:34.277977  685980 retry.go:31] will retry after 433.749538ms: waiting for domain to come up
	I0414 14:24:34.713608  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:34.714052  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:34.714069  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:34.714019  685980 retry.go:31] will retry after 472.018858ms: waiting for domain to come up
	I0414 14:24:35.187735  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:35.188126  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:35.188158  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:35.188060  685980 retry.go:31] will retry after 673.400984ms: waiting for domain to come up
	I0414 14:24:35.862738  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:35.863227  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:35.863247  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:35.863193  685980 retry.go:31] will retry after 923.336117ms: waiting for domain to come up
	I0414 14:24:36.788282  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:36.788659  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:36.788689  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:36.788600  685980 retry.go:31] will retry after 1.136758576s: waiting for domain to come up
	I0414 14:24:37.926786  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:37.927246  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:37.927271  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:37.927183  685980 retry.go:31] will retry after 1.19877191s: waiting for domain to come up
	I0414 14:24:39.127736  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:39.128151  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:39.128176  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:39.128121  685980 retry.go:31] will retry after 1.846405888s: waiting for domain to come up
	I0414 14:24:40.976570  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:40.977031  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:40.977065  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:40.976979  685980 retry.go:31] will retry after 1.553555796s: waiting for domain to come up
	I0414 14:24:42.531874  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:42.532401  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:42.532478  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:42.532395  685980 retry.go:31] will retry after 1.941296316s: waiting for domain to come up
	I0414 14:24:44.476430  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:44.476906  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:44.476972  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:44.476872  685980 retry.go:31] will retry after 3.039598021s: waiting for domain to come up
	I0414 14:24:47.518016  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:47.518473  685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
	I0414 14:24:47.518498  685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:47.518433  685980 retry.go:31] will retry after 3.265785149s: waiting for domain to come up
	I0414 14:24:50.788059  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.788450  685943 main.go:141] libmachine: (multinode-185794) found domain IP: 192.168.39.164
	I0414 14:24:50.788479  685943 main.go:141] libmachine: (multinode-185794) reserving static IP address...
	I0414 14:24:50.788512  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has current primary IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.788971  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "multinode-185794", mac: "52:54:00:92:f4:1e", ip: "192.168.39.164"} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:50.788997  685943 main.go:141] libmachine: (multinode-185794) DBG | skip adding static IP to network mk-multinode-185794 - found existing host DHCP lease matching {name: "multinode-185794", mac: "52:54:00:92:f4:1e", ip: "192.168.39.164"}
	I0414 14:24:50.789012  685943 main.go:141] libmachine: (multinode-185794) reserved static IP address 192.168.39.164 for domain multinode-185794
	I0414 14:24:50.789029  685943 main.go:141] libmachine: (multinode-185794) waiting for SSH...
	I0414 14:24:50.789046  685943 main.go:141] libmachine: (multinode-185794) DBG | Getting to WaitForSSH function...
	I0414 14:24:50.791630  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.792029  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:50.792073  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.792183  685943 main.go:141] libmachine: (multinode-185794) DBG | Using SSH client type: external
	I0414 14:24:50.792208  685943 main.go:141] libmachine: (multinode-185794) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa (-rw-------)
	I0414 14:24:50.792249  685943 main.go:141] libmachine: (multinode-185794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:24:50.792261  685943 main.go:141] libmachine: (multinode-185794) DBG | About to run SSH command:
	I0414 14:24:50.792315  685943 main.go:141] libmachine: (multinode-185794) DBG | exit 0
	I0414 14:24:50.915547  685943 main.go:141] libmachine: (multinode-185794) DBG | SSH cmd err, output: <nil>: 
	I0414 14:24:50.915936  685943 main.go:141] libmachine: (multinode-185794) Calling .GetConfigRaw
	I0414 14:24:50.916601  685943 main.go:141] libmachine: (multinode-185794) Calling .GetIP
	I0414 14:24:50.919416  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.919758  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:50.919785  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.920148  685943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/multinode-185794/config.json ...
	I0414 14:24:50.920391  685943 machine.go:93] provisionDockerMachine start ...
	I0414 14:24:50.920414  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:50.920631  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:50.923251  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.923678  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:50.923721  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:50.923849  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:50.924019  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:50.924209  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:50.924339  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:50.924518  685943 main.go:141] libmachine: Using SSH client type: native
	I0414 14:24:50.924780  685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0414 14:24:50.924795  685943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:24:51.031525  685943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:24:51.031572  685943 main.go:141] libmachine: (multinode-185794) Calling .GetMachineName
	I0414 14:24:51.031908  685943 buildroot.go:166] provisioning hostname "multinode-185794"
	I0414 14:24:51.031937  685943 main.go:141] libmachine: (multinode-185794) Calling .GetMachineName
	I0414 14:24:51.032164  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:51.035070  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.035478  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.035519  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.035622  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:51.035831  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.035965  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.036139  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:51.036371  685943 main.go:141] libmachine: Using SSH client type: native
	I0414 14:24:51.036577  685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0414 14:24:51.036590  685943 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-185794 && echo "multinode-185794" | sudo tee /etc/hostname
	I0414 14:24:51.152485  685943 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-185794
	
	I0414 14:24:51.152514  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:51.155422  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.155801  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.155840  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.156078  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:51.156291  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.156471  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.156599  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:51.156754  685943 main.go:141] libmachine: Using SSH client type: native
	I0414 14:24:51.156973  685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0414 14:24:51.156990  685943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-185794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-185794/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-185794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:24:51.267669  685943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:24:51.267706  685943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-652075/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-652075/.minikube}
	I0414 14:24:51.267731  685943 buildroot.go:174] setting up certificates
	I0414 14:24:51.267747  685943 provision.go:84] configureAuth start
	I0414 14:24:51.267771  685943 main.go:141] libmachine: (multinode-185794) Calling .GetMachineName
	I0414 14:24:51.268111  685943 main.go:141] libmachine: (multinode-185794) Calling .GetIP
	I0414 14:24:51.271330  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.271745  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.271782  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.271968  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:51.274700  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.275012  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.275042  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.275160  685943 provision.go:143] copyHostCerts
	I0414 14:24:51.275190  685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem
	I0414 14:24:51.275225  685943 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem, removing ...
	I0414 14:24:51.275236  685943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem
	I0414 14:24:51.275356  685943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem (1078 bytes)
	I0414 14:24:51.275449  685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem
	I0414 14:24:51.275470  685943 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem, removing ...
	I0414 14:24:51.275478  685943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem
	I0414 14:24:51.275507  685943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem (1123 bytes)
	I0414 14:24:51.275556  685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem
	I0414 14:24:51.275574  685943 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem, removing ...
	I0414 14:24:51.275582  685943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem
	I0414 14:24:51.275605  685943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem (1675 bytes)
	I0414 14:24:51.275658  685943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca-key.pem org=jenkins.multinode-185794 san=[127.0.0.1 192.168.39.164 localhost minikube multinode-185794]
	I0414 14:24:51.480600  685943 provision.go:177] copyRemoteCerts
	I0414 14:24:51.480682  685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:24:51.480712  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:51.483468  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.483801  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.483828  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.484017  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:51.484211  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.484382  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:51.484518  685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
	I0414 14:24:51.564851  685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:24:51.564932  685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 14:24:51.587348  685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:24:51.587446  685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0414 14:24:51.609482  685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:24:51.609548  685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:24:51.631179  685943 provision.go:87] duration metric: took 363.416349ms to configureAuth
	I0414 14:24:51.631208  685943 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:24:51.631422  685943 config.go:182] Loaded profile config "multinode-185794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 14:24:51.631448  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:51.631739  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:51.634356  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.634812  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.634846  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.634941  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:51.635152  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.635338  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.635481  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:51.635618  685943 main.go:141] libmachine: Using SSH client type: native
	I0414 14:24:51.635833  685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0414 14:24:51.635846  685943 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0414 14:24:51.740502  685943 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0414 14:24:51.740532  685943 buildroot.go:70] root file system type: tmpfs
	I0414 14:24:51.740634  685943 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0414 14:24:51.740661  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:51.743433  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.743804  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.743853  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.744009  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:51.744225  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.744392  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.744539  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:51.744700  685943 main.go:141] libmachine: Using SSH client type: native
	I0414 14:24:51.744970  685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0414 14:24:51.745066  685943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0414 14:24:51.860251  685943 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0414 14:24:51.860292  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:51.863152  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.863557  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:51.863592  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:51.863820  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:51.864063  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.864218  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:51.864390  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:51.864574  685943 main.go:141] libmachine: Using SSH client type: native
	I0414 14:24:51.864782  685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0414 14:24:51.864799  685943 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0414 14:24:53.726736  685943 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0414 14:24:53.726768  685943 machine.go:96] duration metric: took 2.806361695s to provisionDockerMachine
	I0414 14:24:53.726780  685943 start.go:293] postStartSetup for "multinode-185794" (driver="kvm2")
	I0414 14:24:53.726791  685943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:24:53.726817  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:53.727195  685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:24:53.727242  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:53.730246  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.730651  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:53.730678  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.730844  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:53.731042  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:53.731227  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:53.731382  685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
	I0414 14:24:53.814301  685943 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:24:53.818382  685943 command_runner.go:130] > NAME=Buildroot
	I0414 14:24:53.818411  685943 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0414 14:24:53.818418  685943 command_runner.go:130] > ID=buildroot
	I0414 14:24:53.818426  685943 command_runner.go:130] > VERSION_ID=2023.02.9
	I0414 14:24:53.818434  685943 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0414 14:24:53.818509  685943 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:24:53.818527  685943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-652075/.minikube/addons for local assets ...
	I0414 14:24:53.818601  685943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-652075/.minikube/files for local assets ...
	I0414 14:24:53.818675  685943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-652075/.minikube/files/etc/ssl/certs/6592492.pem -> 6592492.pem in /etc/ssl/certs
	I0414 14:24:53.818684  685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/files/etc/ssl/certs/6592492.pem -> /etc/ssl/certs/6592492.pem
	I0414 14:24:53.818765  685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:24:53.827986  685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/files/etc/ssl/certs/6592492.pem --> /etc/ssl/certs/6592492.pem (1708 bytes)
	I0414 14:24:53.850420  685943 start.go:296] duration metric: took 123.619928ms for postStartSetup
	I0414 14:24:53.850488  685943 fix.go:56] duration metric: took 21.444598561s for fixHost
	I0414 14:24:53.850522  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:53.853457  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.853879  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:53.853918  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.854053  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:53.854288  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:53.854448  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:53.854596  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:53.854751  685943 main.go:141] libmachine: Using SSH client type: native
	I0414 14:24:53.854988  685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0414 14:24:53.854999  685943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:24:53.960116  685943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640693.921110595
	
	I0414 14:24:53.960155  685943 fix.go:216] guest clock: 1744640693.921110595
	I0414 14:24:53.960166  685943 fix.go:229] Guest: 2025-04-14 14:24:53.921110595 +0000 UTC Remote: 2025-04-14 14:24:53.850494945 +0000 UTC m=+21.591876680 (delta=70.61565ms)
	I0414 14:24:53.960223  685943 fix.go:200] guest clock delta is within tolerance: 70.61565ms
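
The clock check runs `date +%s.%N` in the guest and compares the result with a host timestamp captured around the SSH round trip; here the 70.6ms delta is small enough that no resync is triggered. A sketch of that comparison (the 2s tolerance below is an assumption for illustration, not a value taken from the log):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns
    // how far the guest clock is from the given host reference time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Unix(0, 1744640693850494945) // host reference from this run
        d, _ := clockDelta("1744640693.921110595", host)
        fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(d.Seconds()) < 2)
    }
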
	I0414 14:24:53.960233  685943 start.go:83] releasing machines lock for "multinode-185794", held for 21.554358718s
	I0414 14:24:53.960260  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:53.960576  685943 main.go:141] libmachine: (multinode-185794) Calling .GetIP
	I0414 14:24:53.963358  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.963796  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:53.963821  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.964011  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:53.964526  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:53.964692  685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:24:53.964805  685943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:24:53.964882  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:53.964889  685943 ssh_runner.go:195] Run: cat /version.json
	I0414 14:24:53.964911  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:24:53.967546  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.967682  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.967905  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:53.967931  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.968028  685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:24:53.968067  685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:24:53.968104  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:53.968283  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:24:53.968292  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:53.968448  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:53.968451  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:24:53.968599  685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
	I0414 14:24:53.968645  685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:24:53.968784  685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
	I0414 14:24:54.044486  685943 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0414 14:24:54.045290  685943 ssh_runner.go:195] Run: systemctl --version
	I0414 14:24:54.067527  685943 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0414 14:24:54.067660  685943 command_runner.go:130] > systemd 252 (252)
	I0414 14:24:54.067702  685943 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
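
The `curl -sS -m 2 https://registry.k8s.io/` run above is a connectivity probe: the "Temporary Redirect" body confirms the image registry is reachable from the guest. An equivalent probe in Go (a sketch; the 2-second budget mirrors curl's `-m 2`, and redirects are not followed, matching curl without `-L`):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Return the 307 itself instead of following it, like plain curl.
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }
        resp, err := client.Get("https://registry.k8s.io/")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
    }
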
	I0414 14:24:54.067796  685943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:24:54.073135  685943 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0414 14:24:54.073335  685943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:24:54.073414  685943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:24:54.088465  685943 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0414 14:24:54.088511  685943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
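
Conflicting bridge/podman CNI configs are parked rather than deleted: the `find ... -exec mv` above renames them with a `.mk_disabled` suffix so only the chosen network plugin's config stays active. A Go sketch of the same rename pass (not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames bridge/podman CNI configs in dir by
    // appending .mk_disabled, skipping ones already disabled.
    func disableCNIConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableCNIConfigs("/etc/cni/net.d")
        fmt.Println(disabled, err) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist] <nil>
    }
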
	I0414 14:24:54.088555  685943 start.go:495] detecting cgroup driver to use...
	I0414 14:24:54.088703  685943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:24:54.105918  685943 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
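
`/etc/crictl.yaml` pins crictl to a runtime endpoint; it is written for containerd here and rewritten for cri-dockerd further down once Docker is selected as the runtime. A sketch that renders the same one-line config (hypothetical helper; `/tmp` path used so the example runs unprivileged):

    package main

    import (
        "fmt"
        "os"
    )

    // writeCrictlConfig renders the one-line crictl.yaml seen in the log.
    func writeCrictlConfig(path, endpoint string) error {
        return os.WriteFile(path, []byte(fmt.Sprintf("runtime-endpoint: %s\n", endpoint)), 0644)
    }

    func main() {
        // First write targets containerd; the log later replaces it with
        // unix:///var/run/cri-dockerd.sock.
        if err := writeCrictlConfig("/tmp/crictl.yaml", "unix:///run/containerd/containerd.sock"); err != nil {
            fmt.Println(err)
        }
    }
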
	I0414 14:24:54.106255  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:24:54.116403  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:24:54.127493  685943 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:24:54.127565  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:24:54.137989  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:24:54.148233  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:24:54.158712  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:24:54.168791  685943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:24:54.178838  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:24:54.188729  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:24:54.198911  685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
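
The series of `sed` runs above rewrites `/etc/containerd/config.toml` in place: the pause image is pinned to registry.k8s.io/pause:3.10, `SystemdCgroup = false` selects the cgroupfs driver, legacy runtime names are mapped to `io.containerd.runc.v2`, `conf_dir` is pointed at /etc/cni/net.d, and unprivileged ports are enabled. The same rewrite as a Go regexp sketch (one rule per sed expression; not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        data, err := os.ReadFile("/etc/containerd/config.toml")
        if err != nil {
            fmt.Println(err)
            return
        }
        rules := []struct{ re, repl string }{
            {`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
            {`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
            {`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
            {`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
            {`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
            {`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
        }
        for _, r := range rules {
            data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
        }
        fmt.Print(string(data)) // would be written back with os.WriteFile
    }
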
	I0414 14:24:54.208904  685943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:24:54.217745  685943 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:24:54.217831  685943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:24:54.217881  685943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:24:54.227791  685943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
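
The sysctl probe fails because `bridge-nf-call-iptables` does not exist until the `br_netfilter` kernel module is loaded, hence the fallback to `modprobe br_netfilter` followed by enabling IPv4 forwarding; both are prerequisites for kube-proxy's iptables rules to see bridged pod traffic. The same recovery via the proc interface directly (a sketch; needs root, paths as in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err != nil {
            // The sysctl only appears once br_netfilter is loaded.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe failed: %v: %s\n", err, out)
                return
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Println(err)
        }
    }
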
	I0414 14:24:54.236800  685943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:24:54.346893  685943 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:24:54.372285  685943 start.go:495] detecting cgroup driver to use...
	I0414 14:24:54.372400  685943 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0414 14:24:54.397396  685943 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0414 14:24:54.397426  685943 command_runner.go:130] > [Unit]
	I0414 14:24:54.397433  685943 command_runner.go:130] > Description=Docker Application Container Engine
	I0414 14:24:54.397439  685943 command_runner.go:130] > Documentation=https://docs.docker.com
	I0414 14:24:54.397448  685943 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0414 14:24:54.397456  685943 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0414 14:24:54.397463  685943 command_runner.go:130] > StartLimitBurst=3
	I0414 14:24:54.397470  685943 command_runner.go:130] > StartLimitIntervalSec=60
	I0414 14:24:54.397476  685943 command_runner.go:130] > [Service]
	I0414 14:24:54.397482  685943 command_runner.go:130] > Type=notify
	I0414 14:24:54.397487  685943 command_runner.go:130] > Restart=on-failure
	I0414 14:24:54.397496  685943 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0414 14:24:54.397510  685943 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0414 14:24:54.397517  685943 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0414 14:24:54.397526  685943 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0414 14:24:54.397536  685943 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0414 14:24:54.397547  685943 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0414 14:24:54.397559  685943 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0414 14:24:54.397574  685943 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0414 14:24:54.397584  685943 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0414 14:24:54.397594  685943 command_runner.go:130] > ExecStart=
	I0414 14:24:54.397608  685943 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0414 14:24:54.397617  685943 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0414 14:24:54.397625  685943 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0414 14:24:54.397631  685943 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0414 14:24:54.397635  685943 command_runner.go:130] > LimitNOFILE=infinity
	I0414 14:24:54.397639  685943 command_runner.go:130] > LimitNPROC=infinity
	I0414 14:24:54.397643  685943 command_runner.go:130] > LimitCORE=infinity
	I0414 14:24:54.397648  685943 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0414 14:24:54.397656  685943 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0414 14:24:54.397667  685943 command_runner.go:130] > TasksMax=infinity
	I0414 14:24:54.397677  685943 command_runner.go:130] > TimeoutStartSec=0
	I0414 14:24:54.397684  685943 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0414 14:24:54.397688  685943 command_runner.go:130] > Delegate=yes
	I0414 14:24:54.397694  685943 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0414 14:24:54.397700  685943 command_runner.go:130] > KillMode=process
	I0414 14:24:54.397703  685943 command_runner.go:130] > [Install]
	I0414 14:24:54.397714  685943 command_runner.go:130] > WantedBy=multi-user.target
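
The `systemctl cat docker.service` dump above is what the cgroup-driver detection inspects: the unit's ExecStart carries no `native.cgroupdriver=systemd` exec-opt, so cgroupfs is assumed, consistent with the `SystemdCgroup = false` containerd setting earlier. A heuristic sketch of that inspection (an assumption about the detection logic, not minikube's exact code):

    package main

    import (
        "fmt"
        "strings"
    )

    // cgroupDriverFromUnit guesses the driver from a docker.service dump:
    // an explicit native.cgroupdriver= wins, otherwise cgroupfs is assumed.
    func cgroupDriverFromUnit(unit string) string {
        for _, line := range strings.Split(unit, "\n") {
            line = strings.TrimSpace(line)
            if !strings.HasPrefix(line, "ExecStart=") {
                continue
            }
            if i := strings.Index(line, "native.cgroupdriver="); i >= 0 {
                rest := line[i+len("native.cgroupdriver="):]
                if j := strings.IndexAny(rest, " \t"); j >= 0 {
                    rest = rest[:j]
                }
                return rest
            }
        }
        return "cgroupfs"
    }

    func main() {
        unit := "ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --default-ulimit=nofile=1048576:1048576"
        fmt.Println(cgroupDriverFromUnit(unit)) // cgroupfs
    }
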
	I0414 14:24:54.397782  685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:24:54.414252  685943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:24:54.440014  685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:24:54.453901  685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:24:54.467888  685943 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:24:54.494033  685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:24:54.508340  685943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:24:54.526606  685943 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0414 14:24:54.526881  685943 ssh_runner.go:195] Run: which cri-dockerd
	I0414 14:24:54.530695  685943 command_runner.go:130] > /usr/bin/cri-dockerd
	I0414 14:24:54.530849  685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0414 14:24:54.540271  685943 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
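
The 190-byte `10-cni.conf` drop-in is generated in memory and pushed over SSH; its exact contents are not shown in the log, but the pattern is the standard systemd ExecStart override, where an empty `ExecStart=` first clears the command inherited from the base unit. A sketch of that shape (the concrete flags below are assumptions for illustration, not the bytes from this run):

    package main

    import "fmt"

    func main() {
        // Plausible shape of a drop-in overriding cri-docker's ExecStart.
        // ExecStart= on its own line clears the inherited command; the
        // flag values here are illustrative only.
        dropin := "[Service]\n" +
            "ExecStart=\n" +
            "ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni\n"
        fmt.Print(dropin) // destined for /etc/systemd/system/cri-docker.service.d/10-cni.conf
    }
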
	I0414 14:24:54.556266  685943 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0414 14:24:54.666442  685943 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0414 14:24:54.776234  685943 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0414 14:24:54.776400  685943 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
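
Likewise the 130-byte `/etc/docker/daemon.json` is generated in memory; the log only says it configures the cgroupfs driver, so the payload below is an assumed shape, not the real bytes. The key part is the `native.cgroupdriver=cgroupfs` exec-opt:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed daemon.json for a cgroupfs-driven Docker (sketch only).
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "storage-driver": "overlay2",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out)) // would be scp'd to /etc/docker/daemon.json
    }
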
	I0414 14:24:54.793573  685943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:24:54.907543  685943 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0414 14:25:55.981608  685943 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0414 14:25:55.981641  685943 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0414 14:25:55.982289  685943 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.074682565s)
	I0414 14:25:55.982387  685943 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0414 14:25:55.994919  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
	I0414 14:25:55.994961  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.197958776Z" level=info msg="Starting up"
	I0414 14:25:55.994988  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.198781505Z" level=info msg="containerd not running, starting managed containerd"
	I0414 14:25:55.995008  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.199605247Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
	I0414 14:25:55.995028  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.226444569Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0414 14:25:55.995047  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.245941128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0414 14:25:55.995065  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246073498Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0414 14:25:55.995079  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246159942Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0414 14:25:55.995096  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246206873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0414 14:25:55.995116  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246518954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0414 14:25:55.995134  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246640978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0414 14:25:55.995170  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246855158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0414 14:25:55.995191  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246902606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0414 14:25:55.995212  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246941808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0414 14:25:55.995228  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246977274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0414 14:25:55.995247  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247198205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0414 14:25:55.995267  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247528452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0414 14:25:55.995311  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250227978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0414 14:25:55.995332  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250294640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0414 14:25:55.995387  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250472406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0414 14:25:55.995409  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250517948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0414 14:25:55.995426  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250822546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0414 14:25:55.995443  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250891126Z" level=info msg="metadata content store policy set" policy=shared
	I0414 14:25:55.995460  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252339266Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0414 14:25:55.995478  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252452361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0414 14:25:55.995496  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252499682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0414 14:25:55.995516  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252587729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0414 14:25:55.995532  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252633684Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0414 14:25:55.995551  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252726102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0414 14:25:55.995570  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253034215Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0414 14:25:55.995588  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253155097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0414 14:25:55.995608  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253199587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0414 14:25:55.995626  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253243435Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0414 14:25:55.995650  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253281902Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995673  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253327396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995696  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253364887Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995716  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253462959Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995736  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253609526Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995759  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253650827Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995779  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253738201Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995832  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253817076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0414 14:25:55.995851  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253923991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995869  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254006418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995888  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254044560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995904  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254132419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995923  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254174123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995941  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254257107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995960  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254334894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995979  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254427982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.995997  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254467066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996014  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254578827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996032  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254669466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996060  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254707212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996078  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254788877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996097  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254876725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0414 14:25:55.996115  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254977464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996133  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255064474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996151  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255106276Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0414 14:25:55.996171  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255285853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0414 14:25:55.996195  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255390332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0414 14:25:55.996214  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255474877Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0414 14:25:55.996237  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255517504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0414 14:25:55.996258  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255607339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0414 14:25:55.996276  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255715503Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0414 14:25:55.996290  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255802263Z" level=info msg="NRI interface is disabled by configuration."
	I0414 14:25:55.996319  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256253750Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0414 14:25:55.996335  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256387496Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0414 14:25:55.996352  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256524253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0414 14:25:55.996369  685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256620219Z" level=info msg="containerd successfully booted in 0.031733s"
	I0414 14:25:55.996393  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.227523866Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0414 14:25:55.996409  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.284469372Z" level=info msg="Loading containers: start."
	I0414 14:25:55.996447  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.495795420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0414 14:25:55.996470  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.574685843Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0414 14:25:55.996486  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.638462870Z" level=info msg="Loading containers: done."
	I0414 14:25:55.996507  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.655995821Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0414 14:25:55.996522  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656090385Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0414 14:25:55.996544  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656144269Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0414 14:25:55.996557  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656591352Z" level=info msg="Daemon has completed initialization"
	I0414 14:25:55.996570  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685531817Z" level=info msg="API listen on [::]:2376"
	I0414 14:25:55.996584  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685586150Z" level=info msg="API listen on /var/run/docker.sock"
	I0414 14:25:55.996598  685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 systemd[1]: Started Docker Application Container Engine.
	I0414 14:25:55.996615  685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.883173175Z" level=info msg="Processing signal 'terminated'"
	I0414 14:25:55.996630  685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.884830278Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0414 14:25:55.996648  685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885203639Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0414 14:25:55.996664  685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885222714Z" level=info msg="Daemon shutdown complete"
	I0414 14:25:55.996690  685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885272739Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0414 14:25:55.996749  685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 systemd[1]: Stopping Docker Application Container Engine...
	I0414 14:25:55.996761  685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 systemd[1]: docker.service: Deactivated successfully.
	I0414 14:25:55.996767  685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 systemd[1]: Stopped Docker Application Container Engine.
	I0414 14:25:55.996773  685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
	I0414 14:25:55.996780  685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 dockerd[875]: time="2025-04-14T14:24:55.924289115Z" level=info msg="Starting up"
	I0414 14:25:55.996798  685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 dockerd[875]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0414 14:25:55.996812  685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0414 14:25:55.996822  685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0414 14:25:55.996834  685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 systemd[1]: Failed to start Docker Application Container Engine.
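
The journal pinpoints the failure: the restarted dockerd[875] spends its entire dial budget (14:24:55 to 14:25:55, exactly 60s) waiting on `/run/containerd/containerd.sock` and exits with "context deadline exceeded", which suggests a containerd it expects at that path never came back after the earlier `systemctl stop -f containerd`. A quick way to confirm that diagnosis is to dial the socket with a short deadline instead of waiting out the full minute (a sketch; the 5s timeout is an arbitrary choice):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Reproduce the dial dockerd is stuck on, but fail fast.
        conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
        if err != nil {
            fmt.Println("containerd socket not answering:", err) // the failure mode in this run
            return
        }
        conn.Close()
        fmt.Println("containerd socket is up")
    }
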
	I0414 14:25:56.003146  685943 out.go:201] 
	W0414 14:25:56.004701  685943 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 14 14:24:52 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
	Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.197958776Z" level=info msg="Starting up"
	Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.198781505Z" level=info msg="containerd not running, starting managed containerd"
	Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.199605247Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.226444569Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.245941128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246073498Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246159942Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246206873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246518954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246640978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246855158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246902606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246941808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246977274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247198205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247528452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250227978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250294640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250472406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250517948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250822546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250891126Z" level=info msg="metadata content store policy set" policy=shared
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252339266Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252452361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252499682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252587729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252633684Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252726102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253034215Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253155097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253199587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253243435Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253281902Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253327396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253364887Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253462959Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253609526Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253650827Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253738201Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253817076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253923991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254006418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254044560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254132419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254174123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254257107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254334894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254427982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254467066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254578827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254669466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254707212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254788877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254876725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254977464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255064474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255106276Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255285853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255390332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255474877Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255517504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255607339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255715503Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255802263Z" level=info msg="NRI interface is disabled by configuration."
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256253750Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256387496Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256524253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256620219Z" level=info msg="containerd successfully booted in 0.031733s"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.227523866Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.284469372Z" level=info msg="Loading containers: start."
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.495795420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.574685843Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.638462870Z" level=info msg="Loading containers: done."
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.655995821Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656090385Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656144269Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656591352Z" level=info msg="Daemon has completed initialization"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685531817Z" level=info msg="API listen on [::]:2376"
	Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685586150Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 14 14:24:53 multinode-185794 systemd[1]: Started Docker Application Container Engine.
	Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.883173175Z" level=info msg="Processing signal 'terminated'"
	Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.884830278Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885203639Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885222714Z" level=info msg="Daemon shutdown complete"
	Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885272739Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 14 14:24:54 multinode-185794 systemd[1]: Stopping Docker Application Container Engine...
	Apr 14 14:24:55 multinode-185794 systemd[1]: docker.service: Deactivated successfully.
	Apr 14 14:24:55 multinode-185794 systemd[1]: Stopped Docker Application Container Engine.
	Apr 14 14:24:55 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
	Apr 14 14:24:55 multinode-185794 dockerd[875]: time="2025-04-14T14:24:55.924289115Z" level=info msg="Starting up"
	Apr 14 14:25:55 multinode-185794 dockerd[875]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 14 14:25:55 multinode-185794 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0414 14:25:56.004765  685943 out.go:270] * 
	* 
	W0414 14:25:56.005707  685943 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:25:56.007352  685943 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-185794 -n multinode-185794
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-185794 -n multinode-185794: exit status 6 (225.352008ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 14:25:56.249285  686345 status.go:458] kubeconfig endpoint: get endpoint: "multinode-185794" does not appear in /home/jenkins/minikube-integration/20512-652075/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-185794" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartMultiNode (84.01s)
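The failing step above is the docker restart inside the VM: the first dockerd (pid 491) comes up cleanly, is terminated a second later, and the replacement dockerd (pid 875) then spends its full 60s dialing /run/containerd/containerd.sock before giving up with "context deadline exceeded", so systemd marks docker.service failed and minikube aborts with RUNTIME_ENABLE (exit status 90). A minimal diagnostic sketch, assuming the VM for this profile is still reachable; the containerd-side checks are an assumption, since the journal dump above covers only the docker unit:

	# Inspect both units inside the VM (profile name taken from the log above).
	out/minikube-linux-amd64 ssh -p multinode-185794 -- sudo systemctl status docker containerd
	# Pull the containerd journal too; the dump above was journalctl -u docker only.
	out/minikube-linux-amd64 ssh -p multinode-185794 -- sudo journalctl --no-pager -u containerd
	# Check whether the socket dockerd timed out dialing actually exists.
	out/minikube-linux-amd64 ssh -p multinode-185794 -- ls -l /run/containerd/containerd.sock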

                                                
                                    

Test pass (309/344)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.19
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 4.15
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 0.15
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 91.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 219.64
29 TestAddons/serial/Volcano 43.8
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.53
35 TestAddons/parallel/Registry 15.86
36 TestAddons/parallel/Ingress 21.14
37 TestAddons/parallel/InspektorGadget 11.74
38 TestAddons/parallel/MetricsServer 5.68
40 TestAddons/parallel/CSI 46.36
41 TestAddons/parallel/Headlamp 19.42
42 TestAddons/parallel/CloudSpanner 5.66
43 TestAddons/parallel/LocalPath 55.21
44 TestAddons/parallel/NvidiaDevicePlugin 5.58
45 TestAddons/parallel/Yakd 11.35
47 TestAddons/StoppedEnableDisable 13.6
48 TestCertOptions 109.13
49 TestCertExpiration 327.45
50 TestDockerFlags 53.46
51 TestForceSystemdFlag 58.63
52 TestForceSystemdEnv 76.63
54 TestKVMDriverInstallOrUpdate 4.1
58 TestErrorSpam/setup 49.15
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.74
61 TestErrorSpam/pause 1.21
62 TestErrorSpam/unpause 1.36
63 TestErrorSpam/stop 16.06
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 93.22
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 39.8
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.32
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.19
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 40.4
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.99
86 TestFunctional/serial/LogsFileCmd 1.01
87 TestFunctional/serial/InvalidService 4.79
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 12.86
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.81
97 TestFunctional/parallel/ServiceCmdConnect 24.65
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 47.21
101 TestFunctional/parallel/SSHCmd 0.44
102 TestFunctional/parallel/CpCmd 1.38
103 TestFunctional/parallel/MySQL 28.85
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.36
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.66
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.44
121 TestFunctional/parallel/ImageCommands/Setup 1.67
122 TestFunctional/parallel/DockerEnv/bash 0.86
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.17
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
142 TestFunctional/parallel/ServiceCmd/DeployApp 19.36
143 TestFunctional/parallel/ServiceCmd/List 0.51
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
146 TestFunctional/parallel/ProfileCmd/profile_list 0.34
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
149 TestFunctional/parallel/ServiceCmd/Format 0.3
150 TestFunctional/parallel/MountCmd/any-port 7.53
151 TestFunctional/parallel/ServiceCmd/URL 0.29
152 TestFunctional/parallel/MountCmd/specific-port 1.66
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
158 TestGvisorAddon 236.66
161 TestMultiControlPlane/serial/StartCluster 220.55
162 TestMultiControlPlane/serial/DeployApp 6.73
163 TestMultiControlPlane/serial/PingHostFromPods 1.28
164 TestMultiControlPlane/serial/AddWorkerNode 62.96
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
167 TestMultiControlPlane/serial/CopyFile 13.37
168 TestMultiControlPlane/serial/StopSecondaryNode 13.29
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 39.86
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 241.14
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.25
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
175 TestMultiControlPlane/serial/StopCluster 28.4
176 TestMultiControlPlane/serial/RestartCluster 145.22
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 81.68
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
182 TestImageBuild/serial/Setup 51.34
183 TestImageBuild/serial/NormalBuild 1.41
184 TestImageBuild/serial/BuildWithBuildArg 0.91
185 TestImageBuild/serial/BuildWithDockerIgnore 0.61
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.77
190 TestJSONOutput/start/Command 88.96
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.57
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.55
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 12.62
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.21
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 100.33
222 TestMountStart/serial/StartWithMountFirst 32.62
223 TestMountStart/serial/VerifyMountFirst 0.38
224 TestMountStart/serial/StartWithMountSecond 32.96
225 TestMountStart/serial/VerifyMountSecond 0.39
226 TestMountStart/serial/DeleteFirst 0.86
227 TestMountStart/serial/VerifyMountPostDelete 0.49
228 TestMountStart/serial/Stop 2.39
229 TestMountStart/serial/RestartStopped 24.88
230 TestMountStart/serial/VerifyMountPostStop 0.39
233 TestMultiNode/serial/FreshStart2Nodes 132.36
234 TestMultiNode/serial/DeployApp2Nodes 5.46
235 TestMultiNode/serial/PingHostFrom2Pods 0.83
236 TestMultiNode/serial/AddNode 55.42
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.6
239 TestMultiNode/serial/CopyFile 7.51
240 TestMultiNode/serial/StopNode 3.31
241 TestMultiNode/serial/StartAfterStop 42.37
242 TestMultiNode/serial/RestartKeepsNodes 189.4
243 TestMultiNode/serial/DeleteNode 2.37
244 TestMultiNode/serial/StopMultiNode 25.02
246 TestMultiNode/serial/ValidateNameConflict 50.87
251 TestPreload 189.55
253 TestScheduledStopUnix 127.13
254 TestSkaffold 125.96
257 TestRunningBinaryUpgrade 202.9
259 TestKubernetesUpgrade 185.09
261 TestStoppedBinaryUpgrade/Setup 0.58
270 TestPause/serial/Start 90.53
271 TestStoppedBinaryUpgrade/Upgrade 180.43
272 TestPause/serial/SecondStartNoReconfiguration 56.48
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
275 TestNoKubernetes/serial/StartWithK8s 68.33
276 TestPause/serial/Pause 0.64
277 TestPause/serial/VerifyStatus 0.27
278 TestPause/serial/Unpause 0.58
279 TestPause/serial/PauseAgain 0.69
280 TestPause/serial/DeletePaused 0.85
281 TestPause/serial/VerifyDeletedResources 0.65
282 TestNoKubernetes/serial/StartWithStopK8s 36.01
283 TestStoppedBinaryUpgrade/MinikubeLogs 2.2
284 TestNoKubernetes/serial/Start 49.25
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
286 TestNoKubernetes/serial/ProfileList 0.58
287 TestNoKubernetes/serial/Stop 2.48
288 TestNoKubernetes/serial/StartNoArgs 96.07
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
302 TestStartStop/group/old-k8s-version/serial/FirstStart 151.03
304 TestStartStop/group/embed-certs/serial/FirstStart 102.36
306 TestStartStop/group/no-preload/serial/FirstStart 103.45
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.28
309 TestStartStop/group/embed-certs/serial/DeployApp 9.64
310 TestStartStop/group/old-k8s-version/serial/DeployApp 9.61
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.27
313 TestStartStop/group/embed-certs/serial/Stop 13.39
314 TestStartStop/group/old-k8s-version/serial/Stop 13.4
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
316 TestStartStop/group/embed-certs/serial/SecondStart 316.11
317 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
318 TestStartStop/group/old-k8s-version/serial/SecondStart 553.04
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
320 TestStartStop/group/no-preload/serial/DeployApp 10.33
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.33
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
324 TestStartStop/group/no-preload/serial/Stop 13.33
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 314.69
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
328 TestStartStop/group/no-preload/serial/SecondStart 329.07
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
332 TestStartStop/group/embed-certs/serial/Pause 2.5
334 TestStartStop/group/newest-cni/serial/FirstStart 65.85
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.53
339 TestNetworkPlugins/group/auto/Start 66.51
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
342 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
343 TestStartStop/group/no-preload/serial/Pause 2.52
344 TestNetworkPlugins/group/flannel/Start 87.93
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
347 TestStartStop/group/newest-cni/serial/Stop 13.35
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
349 TestStartStop/group/newest-cni/serial/SecondStart 48.42
350 TestNetworkPlugins/group/auto/KubeletFlags 0.28
351 TestNetworkPlugins/group/auto/NetCatPod 13.27
352 TestNetworkPlugins/group/auto/DNS 0.19
353 TestNetworkPlugins/group/auto/Localhost 0.17
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/enable-default-cni/Start 96.64
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
359 TestStartStop/group/newest-cni/serial/Pause 2.98
360 TestNetworkPlugins/group/flannel/ControllerPod 6.01
361 TestNetworkPlugins/group/bridge/Start 112.03
362 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
363 TestNetworkPlugins/group/flannel/NetCatPod 10.21
364 TestNetworkPlugins/group/flannel/DNS 0.16
365 TestNetworkPlugins/group/flannel/Localhost 0.13
366 TestNetworkPlugins/group/flannel/HairPin 0.15
367 TestNetworkPlugins/group/kindnet/Start 95.02
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
372 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
373 TestStartStop/group/old-k8s-version/serial/Pause 2.68
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
377 TestNetworkPlugins/group/kubenet/Start 90.71
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
379 TestNetworkPlugins/group/bridge/NetCatPod 11.36
380 TestNetworkPlugins/group/custom-flannel/Start 89.55
381 TestNetworkPlugins/group/bridge/DNS 0.17
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.12
384 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
385 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
386 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
387 TestNetworkPlugins/group/calico/Start 109.33
388 TestNetworkPlugins/group/kindnet/DNS 0.19
389 TestNetworkPlugins/group/kindnet/Localhost 0.12
390 TestNetworkPlugins/group/kindnet/HairPin 0.13
391 TestNetworkPlugins/group/false/Start 93.18
392 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
393 TestNetworkPlugins/group/kubenet/NetCatPod 13.27
394 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
395 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
396 TestNetworkPlugins/group/kubenet/DNS 0.19
397 TestNetworkPlugins/group/kubenet/Localhost 0.15
398 TestNetworkPlugins/group/kubenet/HairPin 0.13
399 TestNetworkPlugins/group/custom-flannel/DNS 0.2
400 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
401 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
402 TestNetworkPlugins/group/calico/ControllerPod 6.01
403 TestNetworkPlugins/group/calico/KubeletFlags 0.21
404 TestNetworkPlugins/group/calico/NetCatPod 11.22
405 TestNetworkPlugins/group/false/KubeletFlags 0.22
406 TestNetworkPlugins/group/false/NetCatPod 10.24
407 TestNetworkPlugins/group/false/DNS 16.75
408 TestNetworkPlugins/group/calico/DNS 0.15
409 TestNetworkPlugins/group/calico/Localhost 0.15
410 TestNetworkPlugins/group/calico/HairPin 0.14
411 TestNetworkPlugins/group/false/Localhost 0.11
412 TestNetworkPlugins/group/false/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (10.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-138072 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-138072 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (10.188069563s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 13:45:12.473697  659249 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0414 13:45:12.473822  659249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
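The preload subtest is a pure cache lookup: preload.go reports "Found local preload" for the tarball fetched during json-events, so it passes with no further downloads. A one-line sanity check against the same path (copied verbatim from the log above):

	ls -lh /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4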

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-138072
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-138072: exit status 85 (65.463398ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-138072 | jenkins | v1.35.0 | 14 Apr 25 13:45 UTC |          |
	|         | -p download-only-138072        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:45:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:45:02.331353  659261 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:45:02.331515  659261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:45:02.331528  659261 out.go:358] Setting ErrFile to fd 2...
	I0414 13:45:02.331535  659261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:45:02.331723  659261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	W0414 13:45:02.331876  659261 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20512-652075/.minikube/config/config.json: open /home/jenkins/minikube-integration/20512-652075/.minikube/config/config.json: no such file or directory
	I0414 13:45:02.332499  659261 out.go:352] Setting JSON to true
	I0414 13:45:02.333542  659261 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":26853,"bootTime":1744611449,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:45:02.333669  659261 start.go:139] virtualization: kvm guest
	I0414 13:45:02.335908  659261 out.go:97] [download-only-138072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0414 13:45:02.336054  659261 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 13:45:02.336129  659261 notify.go:220] Checking for updates...
	I0414 13:45:02.337673  659261 out.go:169] MINIKUBE_LOCATION=20512
	I0414 13:45:02.339164  659261 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:45:02.340806  659261 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	I0414 13:45:02.342283  659261 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	I0414 13:45:02.343762  659261 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 13:45:02.346757  659261 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 13:45:02.347019  659261 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:45:02.384616  659261 out.go:97] Using the kvm2 driver based on user configuration
	I0414 13:45:02.384661  659261 start.go:297] selected driver: kvm2
	I0414 13:45:02.384670  659261 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:45:02.385078  659261 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:45:02.385162  659261 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-652075/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:45:02.402344  659261 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:45:02.402405  659261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:45:02.402937  659261 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 13:45:02.403070  659261 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 13:45:02.403110  659261 cni.go:84] Creating CNI manager for ""
	I0414 13:45:02.403162  659261 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0414 13:45:02.403248  659261 start.go:340] cluster config:
	{Name:download-only-138072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-138072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:45:02.403467  659261 iso.go:125] acquiring lock: {Name:mk31812832bbbb744b9a661285e7c7972432ea16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:45:02.405274  659261 out.go:97] Downloading VM boot image ...
	I0414 13:45:02.405303  659261 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20512-652075/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:45:08.489611  659261 out.go:97] Starting "download-only-138072" primary control-plane node in "download-only-138072" cluster
	I0414 13:45:08.489638  659261 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0414 13:45:08.517258  659261 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0414 13:45:08.517295  659261 cache.go:56] Caching tarball of preloaded images
	I0414 13:45:08.517467  659261 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0414 13:45:08.519381  659261 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 13:45:08.519407  659261 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0414 13:45:08.549092  659261 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0414 13:45:11.021631  659261 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0414 13:45:11.021732  659261 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0414 13:45:11.817471  659261 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0414 13:45:11.817859  659261 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/download-only-138072/config.json ...
	I0414 13:45:11.817897  659261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/download-only-138072/config.json: {Name:mk6b5606c96ffd5ca4fec296d3840b6856924288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:45:11.818086  659261 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0414 13:45:11.818278  659261 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20512-652075/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-138072 host does not exist
	  To start a cluster, run: "minikube start -p download-only-138072"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
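The exit status 85 recorded here is expected rather than a failure: the profile was created with --download-only, so no host exists and `minikube logs` can only print the audit and last-start sections before erroring out, which the subtest tolerates. A local reproduction sketch, reusing the exact commands from this run:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-138072 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2
	out/minikube-linux-amd64 logs -p download-only-138072
	echo "logs exit status: $?"   # 85 in this run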

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-138072
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (4.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-560453 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-560453 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=kvm2 : (4.146810929s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.15s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 13:45:16.977756  659249 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0414 13:45:16.977801  659249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-560453
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-560453: exit status 85 (65.282037ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-138072 | jenkins | v1.35.0 | 14 Apr 25 13:45 UTC |                     |
	|         | -p download-only-138072        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 13:45 UTC | 14 Apr 25 13:45 UTC |
	| delete  | -p download-only-138072        | download-only-138072 | jenkins | v1.35.0 | 14 Apr 25 13:45 UTC | 14 Apr 25 13:45 UTC |
	| start   | -o=json --download-only        | download-only-560453 | jenkins | v1.35.0 | 14 Apr 25 13:45 UTC |                     |
	|         | -p download-only-560453        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:45:12
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:45:12.873998  659458 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:45:12.874604  659458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:45:12.874626  659458 out.go:358] Setting ErrFile to fd 2...
	I0414 13:45:12.874634  659458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:45:12.875105  659458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 13:45:12.875946  659458 out.go:352] Setting JSON to true
	I0414 13:45:12.876833  659458 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":26864,"bootTime":1744611449,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:45:12.876948  659458 start.go:139] virtualization: kvm guest
	I0414 13:45:12.878627  659458 out.go:97] [download-only-560453] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:45:12.878793  659458 notify.go:220] Checking for updates...
	I0414 13:45:12.880276  659458 out.go:169] MINIKUBE_LOCATION=20512
	I0414 13:45:12.881563  659458 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:45:12.882763  659458 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	I0414 13:45:12.884216  659458 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	I0414 13:45:12.885426  659458 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-560453 host does not exist
	  To start a cluster, run: "minikube start -p download-only-560453"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-560453
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I0414 13:45:17.599466  659249 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-987456 --alsologtostderr --binary-mirror http://127.0.0.1:34237 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-987456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-987456
--- PASS: TestBinaryMirror (0.63s)

TestOffline (91.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-289680 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-289680 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m31.031489594s)
helpers_test.go:175: Cleaning up "offline-docker-289680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-289680
--- PASS: TestOffline (91.88s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-404718
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-404718: exit status 85 (62.983615ms)

-- stdout --
	* Profile "addons-404718" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-404718"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-404718
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-404718: exit status 85 (61.752019ms)

-- stdout --
	* Profile "addons-404718" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-404718"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (219.64s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-404718 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-404718 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m39.63946419s)
--- PASS: TestAddons/Setup (219.64s)

TestAddons/serial/Volcano (43.8s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 19.586432ms
addons_test.go:807: volcano-scheduler stabilized in 19.706716ms
addons_test.go:815: volcano-admission stabilized in 19.913611ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-zh8bp" [86b5a16e-d3e5-470c-bdec-c682c9081879] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003599947s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-shmkt" [fc005ad7-2bf0-4d90-8bc6-98616c4293f1] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003643807s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-ksd7p" [9d5f875a-83d7-4a12-9f31-68d8c4f2c96a] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004417626s
addons_test.go:842: (dbg) Run:  kubectl --context addons-404718 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-404718 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-404718 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2b1bab3c-8521-441b-8a88-28fa3e991c04] Pending
helpers_test.go:344: "test-job-nginx-0" [2b1bab3c-8521-441b-8a88-28fa3e991c04] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2b1bab3c-8521-441b-8a88-28fa3e991c04] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.003782234s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable volcano --alsologtostderr -v=1: (11.304603235s)
--- PASS: TestAddons/serial/Volcano (43.80s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-404718 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-404718 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-404718 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-404718 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fddccf79-6077-4810-b9f8-f6c1891d76ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fddccf79-6077-4810-b9f8-f6c1891d76ad] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003604719s
addons_test.go:633: (dbg) Run:  kubectl --context addons-404718 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-404718 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-404718 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

TestAddons/parallel/Registry (15.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.859947ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-6x8nb" [edc216db-4830-4832-b354-bb24d6a800ee] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003545167s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bcmdt" [a13ca5df-2191-443c-92b8-e0766733dbda] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004728267s
addons_test.go:331: (dbg) Run:  kubectl --context addons-404718 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-404718 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-404718 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.141123823s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 ip
2025/04/14 13:50:14 [DEBUG] GET http://192.168.39.57:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.86s)

TestAddons/parallel/Ingress (21.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-404718 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-404718 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-404718 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b650795a-ce30-47d3-b90d-934b00d61441] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b650795a-ce30-47d3-b90d-934b00d61441] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004559007s
I0414 13:50:30.270991  659249 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-404718 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.57
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable ingress-dns --alsologtostderr -v=1: (1.313999699s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable ingress --alsologtostderr -v=1: (7.641596666s)
--- PASS: TestAddons/parallel/Ingress (21.14s)

TestAddons/parallel/InspektorGadget (11.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9jkmn" [fc918d18-205f-4ad5-8bd6-84ea327e72ee] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003147199s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable inspektor-gadget --alsologtostderr -v=1: (5.732065755s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

TestAddons/parallel/MetricsServer (5.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.907627ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-sq57g" [3772cda5-e237-4725-9523-d5aee3a981c8] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00590213s
addons_test.go:402: (dbg) Run:  kubectl --context addons-404718 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

TestAddons/parallel/CSI (46.36s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0414 13:50:15.226243  659249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 13:50:15.230087  659249 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 13:50:15.230116  659249 kapi.go:107] duration metric: took 3.888018ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.899478ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-404718 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-404718 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0ffe01e4-1f72-4d21-8d05-f187abafa6a6] Pending
helpers_test.go:344: "task-pv-pod" [0ffe01e4-1f72-4d21-8d05-f187abafa6a6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0ffe01e4-1f72-4d21-8d05-f187abafa6a6] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003233943s
addons_test.go:511: (dbg) Run:  kubectl --context addons-404718 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-404718 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-404718 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-404718 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-404718 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-404718 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-404718 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5b1117e4-2a6f-4410-bf93-495f4a967121] Pending
helpers_test.go:344: "task-pv-pod-restore" [5b1117e4-2a6f-4410-bf93-495f4a967121] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5b1117e4-2a6f-4410-bf93-495f4a967121] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00343352s
addons_test.go:553: (dbg) Run:  kubectl --context addons-404718 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-404718 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-404718 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.729190562s)
--- PASS: TestAddons/parallel/CSI (46.36s)

TestAddons/parallel/Headlamp (19.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-404718 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-6jjw9" [58d15717-212f-41ce-b515-15edaa57a6d8] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-6jjw9" [58d15717-212f-41ce-b515-15edaa57a6d8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-6jjw9" [58d15717-212f-41ce-b515-15edaa57a6d8] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004387944s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable headlamp --alsologtostderr -v=1: (5.667818466s)
--- PASS: TestAddons/parallel/Headlamp (19.42s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-kx849" [fb989cac-cc93-4129-80b8-5bee1066965c] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004032121s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (55.21s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-404718 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-404718 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1330a87c-6aa8-4703-9737-2068d6cc37d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1330a87c-6aa8-4703-9737-2068d6cc37d5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1330a87c-6aa8-4703-9737-2068d6cc37d5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.006012268s
addons_test.go:906: (dbg) Run:  kubectl --context addons-404718 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 ssh "cat /opt/local-path-provisioner/pvc-a14da0d6-605e-47a9-a219-6de5b63b1187_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-404718 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-404718 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.292228472s)
--- PASS: TestAddons/parallel/LocalPath (55.21s)

TestAddons/parallel/NvidiaDevicePlugin (5.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4dqrq" [65475eb8-b730-4a27-86e7-2077a2519da5] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004668927s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

TestAddons/parallel/Yakd (11.35s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-4rbbq" [6b91f791-0d77-40b4-ae00-33ffbb57c7ff] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004724898s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-404718 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-404718 addons disable yakd --alsologtostderr -v=1: (6.342598111s)
--- PASS: TestAddons/parallel/Yakd (11.35s)

TestAddons/StoppedEnableDisable (13.6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-404718
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-404718: (13.300535824s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-404718
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-404718
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-404718
--- PASS: TestAddons/StoppedEnableDisable (13.60s)

TestCertOptions (109.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-659376 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-659376 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m47.062676035s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-659376 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-659376 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-659376 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-659376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-659376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-659376: (1.553467199s)
--- PASS: TestCertOptions (109.13s)

TestCertExpiration (327.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-014088 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-014088 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m27.083797191s)
E0414 14:39:01.536412  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:39:04.098028  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:39:09.220379  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:39:19.462581  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-014088 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-014088 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (59.202688475s)
helpers_test.go:175: Cleaning up "cert-expiration-014088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-014088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-014088: (1.163822291s)
--- PASS: TestCertExpiration (327.45s)

TestDockerFlags (53.46s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-087613 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-087613 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (52.040658231s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-087613 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-087613 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-087613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-087613
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-087613: (1.010306663s)
--- PASS: TestDockerFlags (53.46s)

TestForceSystemdFlag (58.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-935478 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-935478 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (57.550830961s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-935478 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-935478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-935478
--- PASS: TestForceSystemdFlag (58.63s)

TestForceSystemdEnv (76.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-748750 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-748750 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m15.364440072s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-748750 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-748750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-748750
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-748750: (1.013541683s)
--- PASS: TestForceSystemdEnv (76.63s)

TestKVMDriverInstallOrUpdate (4.1s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0414 14:39:57.086461  659249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:39:57.086622  659249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 14:39:57.126138  659249 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 14:39:57.126348  659249 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 14:39:57.126443  659249 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2887785730/001/docker-machine-driver-kvm2
I0414 14:39:57.379236  659249 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2887785730/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005dc0b8 gz:0xc0005dc160 tar:0xc0005dc100 tar.bz2:0xc0005dc110 tar.gz:0xc0005dc120 tar.xz:0xc0005dc130 tar.zst:0xc0005dc140 tbz2:0xc0005dc110 tgz:0xc0005dc120 txz:0xc0005dc130 tzst:0xc0005dc140 xz:0xc0005dc168 zip:0xc0005dc170 zst:0xc0005dc190] Getters:map[file:0xc001840d20 http:0xc001c32690 https:0xc001c326e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 14:39:57.379315  659249 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2887785730/001/docker-machine-driver-kvm2
I0414 14:39:59.324060  659249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:39:59.324168  659249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 14:39:59.358965  659249 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 14:39:59.359002  659249 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 14:39:59.359074  659249 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 14:39:59.359113  659249 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2887785730/002/docker-machine-driver-kvm2
I0414 14:39:59.413871  659249 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2887785730/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005dc0b8 gz:0xc0005dc160 tar:0xc0005dc100 tar.bz2:0xc0005dc110 tar.gz:0xc0005dc120 tar.xz:0xc0005dc130 tar.zst:0xc0005dc140 tbz2:0xc0005dc110 tgz:0xc0005dc120 txz:0xc0005dc130 tzst:0xc0005dc140 xz:0xc0005dc168 zip:0xc0005dc170 zst:0xc0005dc190] Getters:map[file:0xc001cfd8f0 http:0xc0001c65f0 https:0xc0001c6640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 14:39:59.413932  659249 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2887785730/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.10s)

TestErrorSpam/setup (49.15s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-739410 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-739410 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-739410 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-739410 --driver=kvm2 : (49.150929903s)
--- PASS: TestErrorSpam/setup (49.15s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (16.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 stop: (12.492098636s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 stop: (2.059831741s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-739410 --log_dir /tmp/nospam-739410 stop: (1.507594027s)
--- PASS: TestErrorSpam/stop (16.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20512-652075/.minikube/files/etc/test/nested/copy/659249/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (93.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-625084 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
E0414 13:53:57.935367  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:57.941810  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:57.953181  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:57.974639  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:58.016116  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:58.097613  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:58.259226  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:58.580984  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-625084 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m33.215740428s)
--- PASS: TestFunctional/serial/StartWithProxy (93.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.8s)

=== RUN   TestFunctional/serial/SoftStart
I0414 13:53:58.677309  659249 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-625084 --alsologtostderr -v=8
E0414 13:53:59.222925  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:00.504671  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:03.067374  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:08.188998  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:18.431386  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-625084 --alsologtostderr -v=8: (39.799695079s)
functional_test.go:680: soft start took 39.800371988s for "functional-625084" cluster.
I0414 13:54:38.477344  659249 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (39.80s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-625084 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cache add registry.k8s.io/pause:3.1
E0414 13:54:38.913592  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.32s)
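
For context, "cache add" pulls the image on the host, stores it under minikube's local cache directory, and loads it into the node's container runtime; the cache itself is global rather than per-profile. A minimal by-hand sketch (<profile> is a placeholder for the profile name):

    minikube -p <profile> cache add registry.k8s.io/pause:3.1
    minikube cache list    # no -p: the cache is shared across profiles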

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-625084 /tmp/TestFunctionalserialCacheCmdcacheadd_local1076790630/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cache add minikube-local-cache-test:functional-625084
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cache delete minikube-local-cache-test:functional-625084
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-625084
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.457805ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.19s)
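
The cache_reload flow above can be replayed by hand: remove the cached image from the node's runtime, confirm crictl no longer finds it, then restore everything from the host-side cache. A sketch, with <profile> standing in for the profile name:

    minikube -p <profile> ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image gone
    minikube -p <profile> cache reload                                            # reloads all cached images into the node
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds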

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 kubectl -- --context functional-625084 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-625084 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-625084 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 13:55:19.876086  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-625084 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.400941474s)
functional_test.go:778: restart took 40.401103081s for "functional-625084" cluster.
I0414 13:55:24.503134  659249 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (40.40s)
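
The --extra-config flag used here passes component.key=value settings through to the Kubernetes components at start time; the test verifies a restart with an extra apiserver admission plugin. The shape of the invocation, as run above (<profile> is a placeholder):

    minikube start -p <profile> \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all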

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-625084 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
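
The health check above parses the JSON of the control-plane pods; roughly the same view is available with a jsonpath query (a sketch, not the test's own code; <profile> is a placeholder):

    kubectl --context <profile> -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'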

TestFunctional/serial/LogsCmd (0.99s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 logs
--- PASS: TestFunctional/serial/LogsCmd (0.99s)

TestFunctional/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 logs --file /tmp/TestFunctionalserialLogsFileCmd494022470/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-625084 logs --file /tmp/TestFunctionalserialLogsFileCmd494022470/001/logs.txt: (1.008627493s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-625084 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-625084
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-625084: exit status 115 (288.915785ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.183:30401 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-625084 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-625084 delete -f testdata/invalidsvc.yaml: (1.300048373s)
--- PASS: TestFunctional/serial/InvalidService (4.79s)
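
The invalid service is simply a Service whose selector matches no running pod, so "minikube service" refuses with SVC_UNREACHABLE (exit 115). A hypothetical manifest of the same shape (the actual testdata/invalidsvc.yaml may differ; <profile> is a placeholder):

    kubectl --context <profile> apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod    # nothing carries this label
      ports:
      - port: 80
    EOF
    minikube service invalid-svc -p <profile>    # expected: exit 115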

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 config get cpus: exit status 14 (55.957263ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 config get cpus: exit status 14 (54.043336ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
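
The two exit-status-14 results above are the expected behavior of "config get" on an unset key. The full set/get/unset cycle, by hand (<profile> is a placeholder):

    minikube -p <profile> config unset cpus
    minikube -p <profile> config get cpus     # exit 14: key not found in config
    minikube -p <profile> config set cpus 2
    minikube -p <profile> config get cpus     # prints 2, exit 0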

TestFunctional/parallel/DashboardCmd (12.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-625084 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-625084 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 667595: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.86s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-625084 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-625084 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (158.843532ms)
-- stdout --
	* [functional-625084] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0414 13:56:00.995721  667322 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:56:00.995863  667322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:56:00.995882  667322 out.go:358] Setting ErrFile to fd 2...
	I0414 13:56:00.995888  667322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:56:00.996155  667322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 13:56:00.996686  667322 out.go:352] Setting JSON to false
	I0414 13:56:00.997943  667322 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":27512,"bootTime":1744611449,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:56:00.998071  667322 start.go:139] virtualization: kvm guest
	I0414 13:56:01.000072  667322 out.go:177] * [functional-625084] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:56:01.001817  667322 notify.go:220] Checking for updates...
	I0414 13:56:01.001847  667322 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 13:56:01.003329  667322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:56:01.004623  667322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	I0414 13:56:01.006041  667322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	I0414 13:56:01.007361  667322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:56:01.008590  667322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:56:01.010362  667322 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 13:56:01.010763  667322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 13:56:01.010859  667322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:56:01.028549  667322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0414 13:56:01.029167  667322 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:56:01.029734  667322 main.go:141] libmachine: Using API Version  1
	I0414 13:56:01.029985  667322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:56:01.030324  667322 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:56:01.030496  667322 main.go:141] libmachine: (functional-625084) Calling .DriverName
	I0414 13:56:01.030757  667322 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:56:01.031207  667322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 13:56:01.031264  667322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:56:01.051433  667322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0414 13:56:01.051905  667322 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:56:01.052602  667322 main.go:141] libmachine: Using API Version  1
	I0414 13:56:01.052628  667322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:56:01.053362  667322 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:56:01.053904  667322 main.go:141] libmachine: (functional-625084) Calling .DriverName
	I0414 13:56:01.094082  667322 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 13:56:01.095462  667322 start.go:297] selected driver: kvm2
	I0414 13:56:01.095478  667322 start.go:901] validating driver "kvm2" against &{Name:functional-625084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-625084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:56:01.095577  667322 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:56:01.097575  667322 out.go:201] 
	W0414 13:56:01.098678  667322 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 13:56:01.099789  667322 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-625084 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
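
--dry-run runs the full driver and resource validation without touching the VM, so an undersized --memory fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23, as above) while a valid configuration exits 0 (<profile> is a placeholder):

    minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2    # exit 23
    minikube start -p <profile> --dry-run --driver=kvm2                   # exit 0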

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-625084 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-625084 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (158.630794ms)
-- stdout --
	* [functional-625084] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0414 13:56:00.843480  667275 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:56:00.843643  667275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:56:00.843657  667275 out.go:358] Setting ErrFile to fd 2...
	I0414 13:56:00.843664  667275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:56:00.844009  667275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 13:56:00.844596  667275 out.go:352] Setting JSON to false
	I0414 13:56:00.845720  667275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":27512,"bootTime":1744611449,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:56:00.845817  667275 start.go:139] virtualization: kvm guest
	I0414 13:56:00.847828  667275 out.go:177] * [functional-625084] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 13:56:00.849609  667275 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 13:56:00.849655  667275 notify.go:220] Checking for updates...
	I0414 13:56:00.852224  667275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:56:00.853438  667275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	I0414 13:56:00.854699  667275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	I0414 13:56:00.856024  667275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:56:00.857382  667275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:56:00.859219  667275 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 13:56:00.859852  667275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 13:56:00.859975  667275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:56:00.877926  667275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0414 13:56:00.878388  667275 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:56:00.878938  667275 main.go:141] libmachine: Using API Version  1
	I0414 13:56:00.878969  667275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:56:00.879416  667275 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:56:00.879642  667275 main.go:141] libmachine: (functional-625084) Calling .DriverName
	I0414 13:56:00.879992  667275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:56:00.880420  667275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 13:56:00.880468  667275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:56:00.897389  667275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40879
	I0414 13:56:00.897828  667275 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:56:00.898368  667275 main.go:141] libmachine: Using API Version  1
	I0414 13:56:00.898410  667275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:56:00.898814  667275 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:56:00.899000  667275 main.go:141] libmachine: (functional-625084) Calling .DriverName
	I0414 13:56:00.934057  667275 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0414 13:56:00.935573  667275 start.go:297] selected driver: kvm2
	I0414 13:56:00.935591  667275 start.go:901] validating driver "kvm2" against &{Name:functional-625084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-625084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:56:00.935728  667275 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:56:00.938006  667275 out.go:201] 
	W0414 13:56:00.939378  667275 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 13:56:00.940851  667275 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
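
The French output above comes from minikube's built-in translations, which are selected from the process locale. Assuming the fr locale is installed on the host and that minikube reads it from LC_ALL/LANG, the run should be reproducible with:

    LC_ALL=fr_FR.UTF-8 minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2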

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)
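
"status -f" takes a Go template over the status struct (Host, Kubelet, APIServer and Kubeconfig are the fields exercised above), and -o json emits the same data as JSON. A sketch (<profile> is a placeholder):

    minikube -p <profile> status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    minikube -p <profile> status -o json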

TestFunctional/parallel/ServiceCmdConnect (24.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-625084 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-625084 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-4c6mb" [4385d1ec-9664-46c7-b052-353b45c944b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-4c6mb" [4385d1ec-9664-46c7-b052-353b45c944b5] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 24.137902149s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.183:30390
functional_test.go:1692: http://192.168.39.183:30390: success! body:

Hostname: hello-node-connect-58f9cf68d8-4c6mb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.183:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.183:30390
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.65s)
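
The round trip exercised here: create a deployment, expose it as a NodePort service, resolve the node URL, then fetch it. A by-hand sketch (assumes curl on the host; <profile> is a placeholder):

    kubectl --context <profile> create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context <profile> expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p <profile> service hello-node-connect --url)
    curl -s "$URL"    # echoserver reflects the request, as in the body above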

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (47.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0b1047dd-abc0-441f-80fd-6dfc9c2d0e65] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004699456s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-625084 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-625084 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-625084 get pvc myclaim -o=json
I0414 13:55:38.759658  659249 retry.go:31] will retry after 2.275443859s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2b44c008-86c1-4e85-9065-061f52d9a1b8 ResourceVersion:754 Generation:0 CreationTimestamp:2025-04-14 13:55:38 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-2b44c008-86c1-4e85-9065-061f52d9a1b8 StorageClassName:0xc001a9e100 VolumeMode:0xc001a9e110 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-625084 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-625084 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3034180a-bc10-412f-9f3a-e5fd5a2a36ee] Pending
helpers_test.go:344: "sp-pod" [3034180a-bc10-412f-9f3a-e5fd5a2a36ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3034180a-bc10-412f-9f3a-e5fd5a2a36ee] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003974345s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-625084 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-625084 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-625084 delete -f testdata/storage-provisioner/pod.yaml: (1.04147049s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-625084 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a624563e-89fe-4900-9861-88ebf8bcf723] Pending
helpers_test.go:344: "sp-pod" [a624563e-89fe-4900-9861-88ebf8bcf723] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a624563e-89fe-4900-9861-88ebf8bcf723] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003587432s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-625084 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.21s)
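
The persistence check at the heart of this test: write through the PVC mount, delete and recreate the pod, and require the file to survive (the hostpath provisioner keeps the volume across pod restarts). Using the same testdata manifests as above (<profile> is a placeholder):

    kubectl --context <profile> apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> exec sp-pod -- touch /tmp/mount/foo
    kubectl --context <profile> delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> exec sp-pod -- ls /tmp/mount    # foo must still be there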

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh -n functional-625084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cp functional-625084:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1381522811/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh -n functional-625084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh -n functional-625084 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

TestFunctional/parallel/MySQL (28.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-625084 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-nxpl8" [16a2e7c0-2b33-4d96-b1c7-d26436f46564] Pending
helpers_test.go:344: "mysql-58ccfd96bb-nxpl8" [16a2e7c0-2b33-4d96-b1c7-d26436f46564] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-nxpl8" [16a2e7c0-2b33-4d96-b1c7-d26436f46564] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003493081s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;": exit status 1 (200.600353ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0414 13:55:54.339339  659249 retry.go:31] will retry after 515.072837ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;": exit status 1 (187.655369ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0414 13:55:55.042918  659249 retry.go:31] will retry after 828.814698ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;": exit status 1 (299.04457ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0414 13:55:56.171183  659249 retry.go:31] will retry after 2.178157529s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;": exit status 1 (168.023394ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0414 13:55:58.518627  659249 retry.go:31] will retry after 2.117458157s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-625084 exec mysql-58ccfd96bb-nxpl8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.85s)
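
The retries above are expected: mysqld accepts connections well after the pod reports Running, as the ERROR 1045/2002 responses show. A hand-rolled equivalent of the test's retry loop (<profile> and <mysql-pod> are placeholders; the pod name here was mysql-58ccfd96bb-nxpl8):

    until kubectl --context <profile> exec <mysql-pod> -- mysql -ppassword -e 'show databases;'; do
      sleep 2    # keep retrying until mysqld finishes initializing
    done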

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/659249/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo cat /etc/test/nested/copy/659249/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.36s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/659249.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo cat /etc/ssl/certs/659249.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/659249.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo cat /usr/share/ca-certificates/659249.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/6592492.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo cat /etc/ssl/certs/6592492.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/6592492.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo cat /usr/share/ca-certificates/6592492.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)
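
What CertSync asserts: a certificate placed in the test profile's minikube home is synced into the VM at both /etc/ssl/certs and /usr/share/ca-certificates, plus a subject-hash alias (51391683.0 above). Spot-checking by hand with the same paths the test reads (<profile> is a placeholder):

    minikube -p <profile> ssh "sudo cat /etc/ssl/certs/659249.pem"
    minikube -p <profile> ssh "sudo cat /usr/share/ca-certificates/659249.pem"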

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-625084 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 ssh "sudo systemctl is-active crio": exit status 1 (247.125007ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
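Note: the non-zero exit here is the expected result. With docker as the active runtime, `systemctl is-active crio` prints "inactive" and exits non-zero (typically 3), which ssh surfaces as "Process exited with status 3". Reproducible by hand:

	# expect "inactive" followed by exit=3 when cri-o is not the selected runtime
	out/minikube-linux-amd64 -p functional-625084 ssh 'sudo systemctl is-active crio; echo exit=$?'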

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-625084 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-625084
docker.io/kicbase/echo-server:functional-625084
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-625084 image ls --format short --alsologtostderr:
I0414 13:56:02.303675  667546 out.go:345] Setting OutFile to fd 1 ...
I0414 13:56:02.303870  667546 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:02.303892  667546 out.go:358] Setting ErrFile to fd 2...
I0414 13:56:02.303899  667546 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:02.304195  667546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
I0414 13:56:02.305117  667546 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:02.305288  667546 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:02.305879  667546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:02.305963  667546 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:02.323405  667546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40805
I0414 13:56:02.323965  667546 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:02.324634  667546 main.go:141] libmachine: Using API Version  1
I0414 13:56:02.324671  667546 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:02.325155  667546 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:02.325373  667546 main.go:141] libmachine: (functional-625084) Calling .GetState
I0414 13:56:02.327526  667546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:02.327572  667546 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:02.344110  667546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43621
I0414 13:56:02.344802  667546 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:02.345347  667546 main.go:141] libmachine: Using API Version  1
I0414 13:56:02.345372  667546 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:02.345782  667546 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:02.345992  667546 main.go:141] libmachine: (functional-625084) Calling .DriverName
I0414 13:56:02.346207  667546 ssh_runner.go:195] Run: systemctl --version
I0414 13:56:02.346234  667546 main.go:141] libmachine: (functional-625084) Calling .GetSSHHostname
I0414 13:56:02.349329  667546 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:02.349778  667546 main.go:141] libmachine: (functional-625084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:29:d2", ip: ""} in network mk-functional-625084: {Iface:virbr1 ExpiryTime:2025-04-14 14:52:39 +0000 UTC Type:0 Mac:52:54:00:b2:29:d2 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-625084 Clientid:01:52:54:00:b2:29:d2}
I0414 13:56:02.349807  667546 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined IP address 192.168.39.183 and MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:02.349957  667546 main.go:141] libmachine: (functional-625084) Calling .GetSSHPort
I0414 13:56:02.350166  667546 main.go:141] libmachine: (functional-625084) Calling .GetSSHKeyPath
I0414 13:56:02.350345  667546 main.go:141] libmachine: (functional-625084) Calling .GetSSHUsername
I0414 13:56:02.350510  667546 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/functional-625084/id_rsa Username:docker}
I0414 13:56:02.438115  667546 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0414 13:56:02.470837  667546 main.go:141] libmachine: Making call to close driver server
I0414 13:56:02.470850  667546 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:02.471206  667546 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
I0414 13:56:02.471254  667546 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:02.471279  667546 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:56:02.471312  667546 main.go:141] libmachine: Making call to close driver server
I0414 13:56:02.471324  667546 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:02.471652  667546 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:02.471644  667546 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
I0414 13:56:02.471677  667546 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-625084 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-625084 | d0104dcd09df6 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | b6a454c5a800d | 89.7MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.32.2           | d8e673e7c9983 | 69.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.32.2           | 85b7a174738ba | 97MB   |
| registry.k8s.io/kube-proxy                  | v1.32.2           | f1332858868e1 | 94MB   |
| docker.io/library/nginx                     | latest            | 4cad75abc83d5 | 192MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-625084 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-625084 | 416d641b679f8 | 1.24MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-625084 image ls --format table --alsologtostderr:
I0414 13:56:07.437333  667909 out.go:345] Setting OutFile to fd 1 ...
I0414 13:56:07.437470  667909 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:07.437489  667909 out.go:358] Setting ErrFile to fd 2...
I0414 13:56:07.437495  667909 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:07.437723  667909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
I0414 13:56:07.438342  667909 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:07.438460  667909 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:07.438830  667909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:07.438899  667909 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:07.455204  667909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
I0414 13:56:07.455788  667909 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:07.456347  667909 main.go:141] libmachine: Using API Version  1
I0414 13:56:07.456370  667909 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:07.456747  667909 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:07.457002  667909 main.go:141] libmachine: (functional-625084) Calling .GetState
I0414 13:56:07.458796  667909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:07.458860  667909 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:07.475468  667909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
I0414 13:56:07.476027  667909 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:07.476715  667909 main.go:141] libmachine: Using API Version  1
I0414 13:56:07.476749  667909 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:07.477161  667909 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:07.477373  667909 main.go:141] libmachine: (functional-625084) Calling .DriverName
I0414 13:56:07.477589  667909 ssh_runner.go:195] Run: systemctl --version
I0414 13:56:07.477622  667909 main.go:141] libmachine: (functional-625084) Calling .GetSSHHostname
I0414 13:56:07.480969  667909 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:07.481592  667909 main.go:141] libmachine: (functional-625084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:29:d2", ip: ""} in network mk-functional-625084: {Iface:virbr1 ExpiryTime:2025-04-14 14:52:39 +0000 UTC Type:0 Mac:52:54:00:b2:29:d2 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-625084 Clientid:01:52:54:00:b2:29:d2}
I0414 13:56:07.481625  667909 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined IP address 192.168.39.183 and MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:07.481800  667909 main.go:141] libmachine: (functional-625084) Calling .GetSSHPort
I0414 13:56:07.481987  667909 main.go:141] libmachine: (functional-625084) Calling .GetSSHKeyPath
I0414 13:56:07.482154  667909 main.go:141] libmachine: (functional-625084) Calling .GetSSHUsername
I0414 13:56:07.482324  667909 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/functional-625084/id_rsa Username:docker}
I0414 13:56:07.589939  667909 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0414 13:56:07.628084  667909 main.go:141] libmachine: Making call to close driver server
I0414 13:56:07.628105  667909 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:07.628445  667909 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:07.628479  667909 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:56:07.628497  667909 main.go:141] libmachine: Making call to close driver server
I0414 13:56:07.628510  667909 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:07.628534  667909 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
I0414 13:56:07.628787  667909 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:07.628801  667909 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:56:07.628823  667909 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-625084 image ls --format json --alsologtostderr:
[{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"69600000"},{"id":"4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"89700000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"97000000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"416d641b679f887506a9967a61337d473ebac37c88967526f53a808bef121e7c","repoDigests":[],"repoTags":["localhost/my-image:functional-625084"],"size":"1240000"},{"id":"d0104dcd09df60a9183279de8bc6e4eb863a87b936c2f99c5a0603e3d162b0ce","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-625084"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-625084"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"94000000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-625084 image ls --format json --alsologtostderr:
I0414 13:56:07.194536  667832 out.go:345] Setting OutFile to fd 1 ...
I0414 13:56:07.194872  667832 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:07.194893  667832 out.go:358] Setting ErrFile to fd 2...
I0414 13:56:07.194898  667832 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:07.195160  667832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
I0414 13:56:07.195808  667832 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:07.195915  667832 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:07.196278  667832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:07.196340  667832 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:07.212562  667832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
I0414 13:56:07.213077  667832 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:07.213665  667832 main.go:141] libmachine: Using API Version  1
I0414 13:56:07.213694  667832 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:07.214165  667832 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:07.214397  667832 main.go:141] libmachine: (functional-625084) Calling .GetState
I0414 13:56:07.216358  667832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:07.216413  667832 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:07.232442  667832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
I0414 13:56:07.233002  667832 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:07.233520  667832 main.go:141] libmachine: Using API Version  1
I0414 13:56:07.233543  667832 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:07.233878  667832 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:07.234116  667832 main.go:141] libmachine: (functional-625084) Calling .DriverName
I0414 13:56:07.234381  667832 ssh_runner.go:195] Run: systemctl --version
I0414 13:56:07.234419  667832 main.go:141] libmachine: (functional-625084) Calling .GetSSHHostname
I0414 13:56:07.237581  667832 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:07.238012  667832 main.go:141] libmachine: (functional-625084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:29:d2", ip: ""} in network mk-functional-625084: {Iface:virbr1 ExpiryTime:2025-04-14 14:52:39 +0000 UTC Type:0 Mac:52:54:00:b2:29:d2 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-625084 Clientid:01:52:54:00:b2:29:d2}
I0414 13:56:07.238047  667832 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined IP address 192.168.39.183 and MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:07.238150  667832 main.go:141] libmachine: (functional-625084) Calling .GetSSHPort
I0414 13:56:07.238339  667832 main.go:141] libmachine: (functional-625084) Calling .GetSSHKeyPath
I0414 13:56:07.238500  667832 main.go:141] libmachine: (functional-625084) Calling .GetSSHUsername
I0414 13:56:07.238638  667832 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/functional-625084/id_rsa Username:docker}
I0414 13:56:07.323935  667832 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0414 13:56:07.376947  667832 main.go:141] libmachine: Making call to close driver server
I0414 13:56:07.376967  667832 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:07.377279  667832 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:07.377301  667832 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:56:07.377312  667832 main.go:141] libmachine: Making call to close driver server
I0414 13:56:07.377319  667832 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:07.377550  667832 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:07.377585  667832 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
I0414 13:56:07.377602  667832 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
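Note: of the list formats exercised here, the JSON form is the natural scripting target. A sketch, assuming jq is available (jq is not part of the test):

	# print "repoTag size" pairs from the same JSON shown above
	out/minikube-linux-amd64 -p functional-625084 image ls --format json \
		| jq -r '.[] | "\(.repoTags[0]) \(.size)"'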

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-625084 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-625084
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "94000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d0104dcd09df60a9183279de8bc6e4eb863a87b936c2f99c5a0603e3d162b0ce
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-625084
size: "30"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "97000000"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "69600000"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "89700000"
- id: 4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-625084 image ls --format yaml --alsologtostderr:
I0414 13:56:02.527414  667571 out.go:345] Setting OutFile to fd 1 ...
I0414 13:56:02.527683  667571 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:02.527691  667571 out.go:358] Setting ErrFile to fd 2...
I0414 13:56:02.527695  667571 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:02.527949  667571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
I0414 13:56:02.528525  667571 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:02.528658  667571 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:02.529062  667571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:02.529125  667571 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:02.546573  667571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40399
I0414 13:56:02.547105  667571 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:02.547711  667571 main.go:141] libmachine: Using API Version  1
I0414 13:56:02.547739  667571 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:02.548231  667571 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:02.548485  667571 main.go:141] libmachine: (functional-625084) Calling .GetState
I0414 13:56:02.550550  667571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:02.550602  667571 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:02.566991  667571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44777
I0414 13:56:02.567697  667571 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:02.568268  667571 main.go:141] libmachine: Using API Version  1
I0414 13:56:02.568297  667571 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:02.568748  667571 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:02.568967  667571 main.go:141] libmachine: (functional-625084) Calling .DriverName
I0414 13:56:02.569210  667571 ssh_runner.go:195] Run: systemctl --version
I0414 13:56:02.569246  667571 main.go:141] libmachine: (functional-625084) Calling .GetSSHHostname
I0414 13:56:02.572399  667571 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:02.572793  667571 main.go:141] libmachine: (functional-625084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:29:d2", ip: ""} in network mk-functional-625084: {Iface:virbr1 ExpiryTime:2025-04-14 14:52:39 +0000 UTC Type:0 Mac:52:54:00:b2:29:d2 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-625084 Clientid:01:52:54:00:b2:29:d2}
I0414 13:56:02.572834  667571 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined IP address 192.168.39.183 and MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:02.573011  667571 main.go:141] libmachine: (functional-625084) Calling .GetSSHPort
I0414 13:56:02.573204  667571 main.go:141] libmachine: (functional-625084) Calling .GetSSHKeyPath
I0414 13:56:02.573351  667571 main.go:141] libmachine: (functional-625084) Calling .GetSSHUsername
I0414 13:56:02.573486  667571 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/functional-625084/id_rsa Username:docker}
I0414 13:56:02.654465  667571 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0414 13:56:02.688608  667571 main.go:141] libmachine: Making call to close driver server
I0414 13:56:02.688620  667571 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:02.688994  667571 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
I0414 13:56:02.689005  667571 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:02.689039  667571 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:56:02.689051  667571 main.go:141] libmachine: Making call to close driver server
I0414 13:56:02.689075  667571 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:02.689352  667571 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:02.689382  667571 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:56:02.689394  667571 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 ssh pgrep buildkitd: exit status 1 (219.301371ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image build -t localhost/my-image:functional-625084 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-625084 image build -t localhost/my-image:functional-625084 testdata/build --alsologtostderr: (4.007951149s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-625084 image build -t localhost/my-image:functional-625084 testdata/build --alsologtostderr:
I0414 13:56:02.963638  667633 out.go:345] Setting OutFile to fd 1 ...
I0414 13:56:02.963767  667633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:02.963778  667633 out.go:358] Setting ErrFile to fd 2...
I0414 13:56:02.963785  667633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:56:02.964001  667633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
I0414 13:56:02.964662  667633 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:02.965395  667633 config.go:182] Loaded profile config "functional-625084": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 13:56:02.965779  667633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:02.965838  667633 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:02.982199  667633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
I0414 13:56:02.982864  667633 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:02.983457  667633 main.go:141] libmachine: Using API Version  1
I0414 13:56:02.983504  667633 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:02.983870  667633 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:02.984108  667633 main.go:141] libmachine: (functional-625084) Calling .GetState
I0414 13:56:02.986006  667633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 13:56:02.986150  667633 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 13:56:03.009876  667633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
I0414 13:56:03.010488  667633 main.go:141] libmachine: () Calling .GetVersion
I0414 13:56:03.011106  667633 main.go:141] libmachine: Using API Version  1
I0414 13:56:03.011148  667633 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 13:56:03.011588  667633 main.go:141] libmachine: () Calling .GetMachineName
I0414 13:56:03.011846  667633 main.go:141] libmachine: (functional-625084) Calling .DriverName
I0414 13:56:03.012102  667633 ssh_runner.go:195] Run: systemctl --version
I0414 13:56:03.012140  667633 main.go:141] libmachine: (functional-625084) Calling .GetSSHHostname
I0414 13:56:03.015497  667633 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:03.016298  667633 main.go:141] libmachine: (functional-625084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:29:d2", ip: ""} in network mk-functional-625084: {Iface:virbr1 ExpiryTime:2025-04-14 14:52:39 +0000 UTC Type:0 Mac:52:54:00:b2:29:d2 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-625084 Clientid:01:52:54:00:b2:29:d2}
I0414 13:56:03.016332  667633 main.go:141] libmachine: (functional-625084) Calling .GetSSHPort
I0414 13:56:03.016394  667633 main.go:141] libmachine: (functional-625084) DBG | domain functional-625084 has defined IP address 192.168.39.183 and MAC address 52:54:00:b2:29:d2 in network mk-functional-625084
I0414 13:56:03.016544  667633 main.go:141] libmachine: (functional-625084) Calling .GetSSHKeyPath
I0414 13:56:03.016721  667633 main.go:141] libmachine: (functional-625084) Calling .GetSSHUsername
I0414 13:56:03.016902  667633 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/functional-625084/id_rsa Username:docker}
I0414 13:56:03.105528  667633 build_images.go:161] Building image from path: /tmp/build.908623596.tar
I0414 13:56:03.105616  667633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 13:56:03.118805  667633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.908623596.tar
I0414 13:56:03.127688  667633 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.908623596.tar: stat -c "%s %y" /var/lib/minikube/build/build.908623596.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.908623596.tar': No such file or directory
I0414 13:56:03.127736  667633 ssh_runner.go:362] scp /tmp/build.908623596.tar --> /var/lib/minikube/build/build.908623596.tar (3072 bytes)
I0414 13:56:03.167048  667633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.908623596
I0414 13:56:03.192327  667633 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.908623596 -xf /var/lib/minikube/build/build.908623596.tar
I0414 13:56:03.216435  667633 docker.go:360] Building image: /var/lib/minikube/build/build.908623596
I0414 13:56:03.216509  667633 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-625084 /var/lib/minikube/build/build.908623596
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:416d641b679f887506a9967a61337d473ebac37c88967526f53a808bef121e7c done
#8 naming to localhost/my-image:functional-625084 done
#8 DONE 0.1s
I0414 13:56:06.887901  667633 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-625084 /var/lib/minikube/build/build.908623596: (3.671364745s)
I0414 13:56:06.888010  667633 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.908623596
I0414 13:56:06.900828  667633 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.908623596.tar
I0414 13:56:06.918396  667633 build_images.go:217] Built localhost/my-image:functional-625084 from /tmp/build.908623596.tar
I0414 13:56:06.918440  667633 build_images.go:133] succeeded building to: functional-625084
I0414 13:56:06.918445  667633 build_images.go:134] failed building to: 
I0414 13:56:06.918477  667633 main.go:141] libmachine: Making call to close driver server
I0414 13:56:06.918489  667633 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:06.918857  667633 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:06.918879  667633 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 13:56:06.918881  667633 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
I0414 13:56:06.918895  667633 main.go:141] libmachine: Making call to close driver server
I0414 13:56:06.918906  667633 main.go:141] libmachine: (functional-625084) Calling .Close
I0414 13:56:06.919181  667633 main.go:141] libmachine: (functional-625084) DBG | Closing plugin on server side
I0414 13:56:06.919189  667633 main.go:141] libmachine: Successfully made call to close driver server
I0414 13:56:06.919206  667633 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.44s)
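Note: the buildkit trace implies a small three-step context under testdata/build (FROM busybox, RUN true, ADD content.txt). A hypothetical reconstruction consistent with steps #1-#7 above; the real testdata may differ:

	mkdir -p /tmp/build && cd /tmp/build
	printf 'test' > content.txt                 # placeholder; actual file content unknown
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-625084 image build -t localhost/my-image:functional-625084 .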

TestFunctional/parallel/ImageCommands/Setup (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.641797979s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-625084
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.67s)

TestFunctional/parallel/DockerEnv/bash (0.86s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-625084 docker-env) && out/minikube-linux-amd64 status -p functional-625084"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-625084 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.86s)
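Note: `docker-env` emits DOCKER_HOST and related exports that point the host's docker CLI at the VM's daemon, which is why the plain `docker images` in the second command lists the cluster's images:

	eval "$(out/minikube-linux-amd64 -p functional-625084 docker-env)"
	docker images | grep functional-625084      # now answered by the VM's docker daemon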

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image load --daemon kicbase/echo-server:functional-625084 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image load --daemon kicbase/echo-server:functional-625084 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-625084
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image load --daemon kicbase/echo-server:functional-625084 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image save kicbase/echo-server:functional-625084 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image rm kicbase/echo-server:functional-625084 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.17s)
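Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together amount to a tar round-trip. Condensed, with an illustrative /tmp path in place of the workspace path used above:

	out/minikube-linux-amd64 -p functional-625084 image save kicbase/echo-server:functional-625084 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-625084 image rm kicbase/echo-server:functional-625084
	out/minikube-linux-amd64 -p functional-625084 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-625084 image ls | grep echo-server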

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-625084
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 image save --daemon kicbase/echo-server:functional-625084 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-625084
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/parallel/ServiceCmd/DeployApp (19.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-625084 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-625084 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-j6ltp" [ec73878f-f9ee-4098-b0d4-e962fe5090c8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-j6ltp" [ec73878f-f9ee-4098-b0d4-e962fe5090c8] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.16111761s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.36s)
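Note: the 19s "healthy within" wait is the test's own poller; an equivalent standalone sequence (plain kubectl, not the test helper) would be:

	kubectl --context functional-625084 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-625084 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-625084 wait --for=condition=Ready pod -l app=hello-node --timeout=10m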

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 service list -o json
functional_test.go:1511: Took "462.413001ms" to run "out/minikube-linux-amd64 -p functional-625084 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "284.160765ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "53.656047ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.183:30988
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "282.005113ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "49.651822ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/MountCmd/any-port (7.53s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdany-port1748730129/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744638959712593954" to /tmp/TestFunctionalparallelMountCmdany-port1748730129/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744638959712593954" to /tmp/TestFunctionalparallelMountCmdany-port1748730129/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744638959712593954" to /tmp/TestFunctionalparallelMountCmdany-port1748730129/001/test-1744638959712593954
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.186399ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0414 13:55:59.940072  659249 retry.go:31] will retry after 371.955726ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 13:55 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 13:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 13:55 test-1744638959712593954
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh cat /mount-9p/test-1744638959712593954
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-625084 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [407263b3-1150-4dbe-8982-1b624436437b] Pending
helpers_test.go:344: "busybox-mount" [407263b3-1150-4dbe-8982-1b624436437b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [407263b3-1150-4dbe-8982-1b624436437b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [407263b3-1150-4dbe-8982-1b624436437b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003363258s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-625084 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdany-port1748730129/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.53s)
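
The "will retry after 371.955726ms" line above shows the harness absorbing a race: the 9p mount daemon may not be up the first time findmnt runs, so the check is retried with a randomized delay. A minimal sketch of that jittered-retry shape (illustrative; not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "log"
        "math/rand"
        "time"
    )

    // retryWithJitter re-runs fn with a randomized delay between attempts so
    // parallel tests do not hammer the VM in lockstep.
    func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            log.Printf("will retry after %v: %v", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithJitter(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 2 {
                return errors.New("exit status 1") // mount not visible yet
            }
            return nil // findmnt found the 9p mount
        })
        fmt.Println(err)
    }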

TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.183:30988
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

TestFunctional/parallel/MountCmd/specific-port (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdspecific-port1235335646/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.896889ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0414 13:56:07.480693  659249 retry.go:31] will retry after 375.420333ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdspecific-port1235335646/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 ssh "sudo umount -f /mount-9p": exit status 1 (212.062315ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-625084 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdspecific-port1235335646/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)
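
The failed "umount -f" at the end is expected noise: by the time the deferred cleanup runs, the mount daemon has already torn the mount down, so umount reports "not mounted" and exits 32 (mount failure, per umount(8)), which the harness logs without failing the test. A sketch of cleanup that tolerates the already-unmounted case (the helper is illustrative):

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // cleanupMount force-unmounts the 9p mount inside the guest, treating an
    // already-unmounted path as success rather than a test failure.
    func cleanupMount(minikube, profile, mountPoint string) {
        cmd := exec.Command(minikube, "-p", profile, "ssh",
            "sudo umount -f "+mountPoint)
        out, err := cmd.CombinedOutput()
        if err != nil && strings.Contains(string(out), "not mounted") {
            log.Printf("%s already unmounted, nothing to do", mountPoint)
            return
        }
        if err != nil {
            log.Printf("umount %s failed: %v\n%s", mountPoint, err, out)
        }
    }

    func main() {
        cleanupMount("out/minikube-linux-amd64", "functional-625084", "/mount-9p")
    }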

TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3994606715/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3994606715/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3994606715/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T" /mount1: exit status 1 (264.203349ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0414 13:56:09.170082  659249 retry.go:31] will retry after 616.094102ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-625084 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-625084 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3994606715/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3994606715/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-625084 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3994606715/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2025/04/14 13:56:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)
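
VerifyCleanup starts three concurrent mount daemons and then relies on one "mount ... --kill=true" to reap them all, which is why each per-daemon stop afterwards logs "unable to find parent, assuming dead". A minimal sketch of that start-several, kill-once shape (illustrative):

    package main

    import (
        "log"
        "os/exec"
    )

    // startMounts launches one background mount daemon per target;
    // killAllMounts then tears every one of them down in a single call.
    func startMounts(minikube, profile, hostDir string, targets []string) []*exec.Cmd {
        var cmds []*exec.Cmd
        for _, t := range targets {
            c := exec.Command(minikube, "mount", "-p", profile, hostDir+":"+t)
            if err := c.Start(); err != nil {
                log.Printf("start mount %s: %v", t, err)
                continue
            }
            cmds = append(cmds, c)
        }
        return cmds
    }

    func killAllMounts(minikube, profile string) error {
        return exec.Command(minikube, "mount", "-p", profile, "--kill=true").Run()
    }

    func main() {
        mk, profile := "out/minikube-linux-amd64", "functional-625084"
        startMounts(mk, profile, "/tmp/shared", []string{"/mount1", "/mount2", "/mount3"})
        if err := killAllMounts(mk, profile); err != nil {
            log.Fatal(err)
        }
    }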

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-625084
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-625084
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-625084
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (236.66s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-249524 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-249524 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m39.235697192s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-249524 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-249524 cache add gcr.io/k8s-minikube/gvisor-addon:2: (21.268558482s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-249524 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-249524 addons enable gvisor: (3.637854032s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [899e205b-71b4-4986-a9f9-2a440f01250f] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004471404s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-249524 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [936c2b6c-d56e-4446-a67e-b2fd8465d734] Pending
helpers_test.go:344: "nginx-gvisor" [936c2b6c-d56e-4446-a67e-b2fd8465d734] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0414 14:40:20.906352  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:40:32.135882  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-gvisor" [936c2b6c-d56e-4446-a67e-b2fd8465d734] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 40.003447331s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-249524
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-249524: (7.310950898s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-249524 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-249524 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (46.820761245s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [899e205b-71b4-4986-a9f9-2a440f01250f] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.002986002s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [936c2b6c-d56e-4446-a67e-b2fd8465d734] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.003814795s
helpers_test.go:175: Cleaning up "gvisor-249524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-249524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-249524: (1.181976025s)
--- PASS: TestGvisorAddon (236.66s)
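
The pods this test waits on come from testdata/nginx-gvisor.yaml, whose essential ingredient is pinning the pod to the RuntimeClass the gvisor addon installs. A hedged client-go sketch of such a pod; the real testdata file may differ in detail, but the labels here match the "run=nginx,runtime=gvisor" selector in the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nginxGvisorPod builds a pod that the gvisor addon's RuntimeClass will
    // run under gVisor instead of the default runc runtime.
    func nginxGvisorPod() *corev1.Pod {
        runtimeClass := "gvisor"
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "nginx-gvisor",
                Labels: map[string]string{"run": "nginx", "runtime": "gvisor"},
            },
            Spec: corev1.PodSpec{
                RuntimeClassName: &runtimeClass,
                Containers: []corev1.Container{
                    {Name: "nginx", Image: "nginx"},
                },
            },
        }
    }

    func main() {
        fmt.Printf("%+v\n", nginxGvisorPod())
    }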

TestMultiControlPlane/serial/StartCluster (220.55s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-422729 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0414 13:56:41.799701  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:58:57.937113  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:59:25.643473  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-422729 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m39.85010674s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (220.55s)

TestMultiControlPlane/serial/DeployApp (6.73s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-422729 -- rollout status deployment/busybox: (4.411998967s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-k85zz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-q9tjl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-wwhwf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-k85zz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-q9tjl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-wwhwf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-k85zz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-q9tjl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-wwhwf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.73s)
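
The nslookup fan-out above confirms that CoreDNS answers from a busybox pod scheduled on each node of the HA cluster, for one external name and the two in-cluster service names. A minimal sketch of that loop (pod names taken from this run; the helper is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // checkDNS resolves each name from inside each pod, failing fast on the
    // first pod/name combination that cannot resolve.
    func checkDNS(kubeContext string, pods, names []string) error {
        for _, pod := range pods {
            for _, name := range names {
                cmd := exec.Command("kubectl", "--context", kubeContext,
                    "exec", pod, "--", "nslookup", name)
                if out, err := cmd.CombinedOutput(); err != nil {
                    return fmt.Errorf("%s could not resolve %s: %v\n%s", pod, name, err, out)
                }
            }
        }
        return nil
    }

    func main() {
        pods := []string{"busybox-58667487b6-k85zz", "busybox-58667487b6-q9tjl", "busybox-58667487b6-wwhwf"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        if err := checkDNS("ha-422729", pods, names); err != nil {
            log.Fatal(err)
        }
    }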

TestMultiControlPlane/serial/PingHostFromPods (1.28s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-k85zz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-k85zz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-q9tjl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-q9tjl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-wwhwf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-422729 -- exec busybox-58667487b6-wwhwf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)

TestMultiControlPlane/serial/AddWorkerNode (62.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-422729 -v=7 --alsologtostderr
E0414 14:00:32.135556  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:32.142021  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:32.153533  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:32.174980  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:32.216512  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:32.298079  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:32.459689  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:32.781520  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:33.423636  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:34.705435  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:37.267495  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:42.389374  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:00:52.631405  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-422729 -v=7 --alsologtostderr: (1m2.102091499s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
E0414 14:01:13.113142  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.96s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-422729 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (13.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp testdata/cp-test.txt ha-422729:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3196591382/001/cp-test_ha-422729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729:/home/docker/cp-test.txt ha-422729-m02:/home/docker/cp-test_ha-422729_ha-422729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test_ha-422729_ha-422729-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729:/home/docker/cp-test.txt ha-422729-m03:/home/docker/cp-test_ha-422729_ha-422729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test_ha-422729_ha-422729-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729:/home/docker/cp-test.txt ha-422729-m04:/home/docker/cp-test_ha-422729_ha-422729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test_ha-422729_ha-422729-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp testdata/cp-test.txt ha-422729-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3196591382/001/cp-test_ha-422729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m02:/home/docker/cp-test.txt ha-422729:/home/docker/cp-test_ha-422729-m02_ha-422729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test_ha-422729-m02_ha-422729.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m02:/home/docker/cp-test.txt ha-422729-m03:/home/docker/cp-test_ha-422729-m02_ha-422729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test_ha-422729-m02_ha-422729-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m02:/home/docker/cp-test.txt ha-422729-m04:/home/docker/cp-test_ha-422729-m02_ha-422729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test_ha-422729-m02_ha-422729-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp testdata/cp-test.txt ha-422729-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3196591382/001/cp-test_ha-422729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m03:/home/docker/cp-test.txt ha-422729:/home/docker/cp-test_ha-422729-m03_ha-422729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test_ha-422729-m03_ha-422729.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m03:/home/docker/cp-test.txt ha-422729-m02:/home/docker/cp-test_ha-422729-m03_ha-422729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test_ha-422729-m03_ha-422729-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m03:/home/docker/cp-test.txt ha-422729-m04:/home/docker/cp-test_ha-422729-m03_ha-422729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test_ha-422729-m03_ha-422729-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp testdata/cp-test.txt ha-422729-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3196591382/001/cp-test_ha-422729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m04:/home/docker/cp-test.txt ha-422729:/home/docker/cp-test_ha-422729-m04_ha-422729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729 "sudo cat /home/docker/cp-test_ha-422729-m04_ha-422729.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m04:/home/docker/cp-test.txt ha-422729-m02:/home/docker/cp-test_ha-422729-m04_ha-422729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m02 "sudo cat /home/docker/cp-test_ha-422729-m04_ha-422729-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 cp ha-422729-m04:/home/docker/cp-test.txt ha-422729-m03:/home/docker/cp-test_ha-422729-m04_ha-422729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 ssh -n ha-422729-m03 "sudo cat /home/docker/cp-test_ha-422729-m04_ha-422729-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.37s)
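
The long command list above is an n-by-n matrix: testdata/cp-test.txt is copied from every node to every other node (and to/from the host), and each copy is read back over "ssh ... sudo cat". A compact sketch of the node-to-node portion (illustrative; node names are the ones in this run):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // copyMatrix runs "minikube cp" for every ordered pair of distinct nodes
    // and reads each copy back over "minikube ssh" to verify it landed.
    func copyMatrix(minikube, profile string, nodes []string) error {
        for _, src := range nodes {
            for _, dst := range nodes {
                if src == dst {
                    continue
                }
                target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
                if err := exec.Command(minikube, "-p", profile, "cp",
                    src+":/home/docker/cp-test.txt", target).Run(); err != nil {
                    return fmt.Errorf("cp %s -> %s: %w", src, dst, err)
                }
                verify := fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst)
                if err := exec.Command(minikube, "-p", profile, "ssh", "-n", dst, verify).Run(); err != nil {
                    return fmt.Errorf("verify on %s: %w", dst, err)
                }
            }
        }
        return nil
    }

    func main() {
        nodes := []string{"ha-422729", "ha-422729-m02", "ha-422729-m03", "ha-422729-m04"}
        if err := copyMatrix("out/minikube-linux-amd64", "ha-422729", nodes); err != nil {
            log.Fatal(err)
        }
    }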

TestMultiControlPlane/serial/StopSecondaryNode (13.29s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-422729 node stop m02 -v=7 --alsologtostderr: (12.642538708s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr: exit status 7 (643.155547ms)

-- stdout --
	ha-422729
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-422729-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422729-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-422729-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:01:40.387940  672651 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:01:40.388196  672651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:40.388206  672651 out.go:358] Setting ErrFile to fd 2...
	I0414 14:01:40.388210  672651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:01:40.388374  672651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 14:01:40.388534  672651 out.go:352] Setting JSON to false
	I0414 14:01:40.388567  672651 mustload.go:65] Loading cluster: ha-422729
	I0414 14:01:40.388961  672651 notify.go:220] Checking for updates...
	I0414 14:01:40.389915  672651 config.go:182] Loaded profile config "ha-422729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 14:01:40.389996  672651 status.go:174] checking status of ha-422729 ...
	I0414 14:01:40.390789  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.390874  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.407866  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0414 14:01:40.408363  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.409012  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.409040  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.409397  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.409635  672651 main.go:141] libmachine: (ha-422729) Calling .GetState
	I0414 14:01:40.411614  672651 status.go:371] ha-422729 host status = "Running" (err=<nil>)
	I0414 14:01:40.411634  672651 host.go:66] Checking if "ha-422729" exists ...
	I0414 14:01:40.411956  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.412008  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.427256  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0414 14:01:40.427758  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.428268  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.428292  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.428611  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.428828  672651 main.go:141] libmachine: (ha-422729) Calling .GetIP
	I0414 14:01:40.432018  672651 main.go:141] libmachine: (ha-422729) DBG | domain ha-422729 has defined MAC address 52:54:00:01:17:ca in network mk-ha-422729
	I0414 14:01:40.432728  672651 main.go:141] libmachine: (ha-422729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:17:ca", ip: ""} in network mk-ha-422729: {Iface:virbr1 ExpiryTime:2025-04-14 14:56:36 +0000 UTC Type:0 Mac:52:54:00:01:17:ca Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-422729 Clientid:01:52:54:00:01:17:ca}
	I0414 14:01:40.432752  672651 main.go:141] libmachine: (ha-422729) DBG | domain ha-422729 has defined IP address 192.168.39.76 and MAC address 52:54:00:01:17:ca in network mk-ha-422729
	I0414 14:01:40.432940  672651 host.go:66] Checking if "ha-422729" exists ...
	I0414 14:01:40.433247  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.433304  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.449162  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0414 14:01:40.449698  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.450332  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.450368  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.450755  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.450988  672651 main.go:141] libmachine: (ha-422729) Calling .DriverName
	I0414 14:01:40.451213  672651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:01:40.451243  672651 main.go:141] libmachine: (ha-422729) Calling .GetSSHHostname
	I0414 14:01:40.454577  672651 main.go:141] libmachine: (ha-422729) DBG | domain ha-422729 has defined MAC address 52:54:00:01:17:ca in network mk-ha-422729
	I0414 14:01:40.455079  672651 main.go:141] libmachine: (ha-422729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:17:ca", ip: ""} in network mk-ha-422729: {Iface:virbr1 ExpiryTime:2025-04-14 14:56:36 +0000 UTC Type:0 Mac:52:54:00:01:17:ca Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-422729 Clientid:01:52:54:00:01:17:ca}
	I0414 14:01:40.455110  672651 main.go:141] libmachine: (ha-422729) DBG | domain ha-422729 has defined IP address 192.168.39.76 and MAC address 52:54:00:01:17:ca in network mk-ha-422729
	I0414 14:01:40.455270  672651 main.go:141] libmachine: (ha-422729) Calling .GetSSHPort
	I0414 14:01:40.455482  672651 main.go:141] libmachine: (ha-422729) Calling .GetSSHKeyPath
	I0414 14:01:40.455661  672651 main.go:141] libmachine: (ha-422729) Calling .GetSSHUsername
	I0414 14:01:40.455859  672651 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/ha-422729/id_rsa Username:docker}
	I0414 14:01:40.535103  672651 ssh_runner.go:195] Run: systemctl --version
	I0414 14:01:40.540931  672651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:01:40.560007  672651 kubeconfig.go:125] found "ha-422729" server: "https://192.168.39.254:8443"
	I0414 14:01:40.560060  672651 api_server.go:166] Checking apiserver status ...
	I0414 14:01:40.560110  672651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:01:40.575247  672651 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1986/cgroup
	W0414 14:01:40.584413  672651 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1986/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:01:40.584471  672651 ssh_runner.go:195] Run: ls
	I0414 14:01:40.588982  672651 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:01:40.593095  672651 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:01:40.593123  672651 status.go:463] ha-422729 apiserver status = Running (err=<nil>)
	I0414 14:01:40.593134  672651 status.go:176] ha-422729 status: &{Name:ha-422729 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:01:40.593159  672651 status.go:174] checking status of ha-422729-m02 ...
	I0414 14:01:40.593455  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.593494  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.609857  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0414 14:01:40.610322  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.610699  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.610720  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.611065  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.611267  672651 main.go:141] libmachine: (ha-422729-m02) Calling .GetState
	I0414 14:01:40.612925  672651 status.go:371] ha-422729-m02 host status = "Stopped" (err=<nil>)
	I0414 14:01:40.612938  672651 status.go:384] host is not running, skipping remaining checks
	I0414 14:01:40.612955  672651 status.go:176] ha-422729-m02 status: &{Name:ha-422729-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:01:40.612972  672651 status.go:174] checking status of ha-422729-m03 ...
	I0414 14:01:40.613255  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.613297  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.629044  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
	I0414 14:01:40.629495  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.629981  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.630018  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.630376  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.630581  672651 main.go:141] libmachine: (ha-422729-m03) Calling .GetState
	I0414 14:01:40.632320  672651 status.go:371] ha-422729-m03 host status = "Running" (err=<nil>)
	I0414 14:01:40.632337  672651 host.go:66] Checking if "ha-422729-m03" exists ...
	I0414 14:01:40.632681  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.632724  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.648753  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I0414 14:01:40.649168  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.649633  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.649658  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.650005  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.650190  672651 main.go:141] libmachine: (ha-422729-m03) Calling .GetIP
	I0414 14:01:40.653123  672651 main.go:141] libmachine: (ha-422729-m03) DBG | domain ha-422729-m03 has defined MAC address 52:54:00:03:9c:5c in network mk-ha-422729
	I0414 14:01:40.653618  672651 main.go:141] libmachine: (ha-422729-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:9c:5c", ip: ""} in network mk-ha-422729: {Iface:virbr1 ExpiryTime:2025-04-14 14:58:50 +0000 UTC Type:0 Mac:52:54:00:03:9c:5c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-422729-m03 Clientid:01:52:54:00:03:9c:5c}
	I0414 14:01:40.653646  672651 main.go:141] libmachine: (ha-422729-m03) DBG | domain ha-422729-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:03:9c:5c in network mk-ha-422729
	I0414 14:01:40.653814  672651 host.go:66] Checking if "ha-422729-m03" exists ...
	I0414 14:01:40.654119  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.654162  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.669673  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I0414 14:01:40.670189  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.670672  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.670694  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.671122  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.671359  672651 main.go:141] libmachine: (ha-422729-m03) Calling .DriverName
	I0414 14:01:40.671563  672651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:01:40.671585  672651 main.go:141] libmachine: (ha-422729-m03) Calling .GetSSHHostname
	I0414 14:01:40.674643  672651 main.go:141] libmachine: (ha-422729-m03) DBG | domain ha-422729-m03 has defined MAC address 52:54:00:03:9c:5c in network mk-ha-422729
	I0414 14:01:40.675161  672651 main.go:141] libmachine: (ha-422729-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:9c:5c", ip: ""} in network mk-ha-422729: {Iface:virbr1 ExpiryTime:2025-04-14 14:58:50 +0000 UTC Type:0 Mac:52:54:00:03:9c:5c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-422729-m03 Clientid:01:52:54:00:03:9c:5c}
	I0414 14:01:40.675193  672651 main.go:141] libmachine: (ha-422729-m03) DBG | domain ha-422729-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:03:9c:5c in network mk-ha-422729
	I0414 14:01:40.675348  672651 main.go:141] libmachine: (ha-422729-m03) Calling .GetSSHPort
	I0414 14:01:40.675521  672651 main.go:141] libmachine: (ha-422729-m03) Calling .GetSSHKeyPath
	I0414 14:01:40.675686  672651 main.go:141] libmachine: (ha-422729-m03) Calling .GetSSHUsername
	I0414 14:01:40.675839  672651 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/ha-422729-m03/id_rsa Username:docker}
	I0414 14:01:40.758387  672651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:01:40.776208  672651 kubeconfig.go:125] found "ha-422729" server: "https://192.168.39.254:8443"
	I0414 14:01:40.776238  672651 api_server.go:166] Checking apiserver status ...
	I0414 14:01:40.776270  672651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:01:40.793449  672651 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1834/cgroup
	W0414 14:01:40.802450  672651 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1834/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:01:40.802548  672651 ssh_runner.go:195] Run: ls
	I0414 14:01:40.806487  672651 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:01:40.811987  672651 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:01:40.812017  672651 status.go:463] ha-422729-m03 apiserver status = Running (err=<nil>)
	I0414 14:01:40.812030  672651 status.go:176] ha-422729-m03 status: &{Name:ha-422729-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:01:40.812066  672651 status.go:174] checking status of ha-422729-m04 ...
	I0414 14:01:40.812376  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.812423  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.828910  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39117
	I0414 14:01:40.829409  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.829876  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.829899  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.830290  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.830440  672651 main.go:141] libmachine: (ha-422729-m04) Calling .GetState
	I0414 14:01:40.832305  672651 status.go:371] ha-422729-m04 host status = "Running" (err=<nil>)
	I0414 14:01:40.832322  672651 host.go:66] Checking if "ha-422729-m04" exists ...
	I0414 14:01:40.832688  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.832739  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.851152  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0414 14:01:40.851608  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.852033  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.852102  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.852520  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.852713  672651 main.go:141] libmachine: (ha-422729-m04) Calling .GetIP
	I0414 14:01:40.856010  672651 main.go:141] libmachine: (ha-422729-m04) DBG | domain ha-422729-m04 has defined MAC address 52:54:00:5b:41:8c in network mk-ha-422729
	I0414 14:01:40.856468  672651 main.go:141] libmachine: (ha-422729-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:41:8c", ip: ""} in network mk-ha-422729: {Iface:virbr1 ExpiryTime:2025-04-14 15:00:25 +0000 UTC Type:0 Mac:52:54:00:5b:41:8c Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-422729-m04 Clientid:01:52:54:00:5b:41:8c}
	I0414 14:01:40.856490  672651 main.go:141] libmachine: (ha-422729-m04) DBG | domain ha-422729-m04 has defined IP address 192.168.39.182 and MAC address 52:54:00:5b:41:8c in network mk-ha-422729
	I0414 14:01:40.856741  672651 host.go:66] Checking if "ha-422729-m04" exists ...
	I0414 14:01:40.857190  672651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:01:40.857247  672651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:01:40.873591  672651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0414 14:01:40.874078  672651 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:01:40.874554  672651 main.go:141] libmachine: Using API Version  1
	I0414 14:01:40.874579  672651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:01:40.875026  672651 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:01:40.875261  672651 main.go:141] libmachine: (ha-422729-m04) Calling .DriverName
	I0414 14:01:40.875487  672651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:01:40.875511  672651 main.go:141] libmachine: (ha-422729-m04) Calling .GetSSHHostname
	I0414 14:01:40.879027  672651 main.go:141] libmachine: (ha-422729-m04) DBG | domain ha-422729-m04 has defined MAC address 52:54:00:5b:41:8c in network mk-ha-422729
	I0414 14:01:40.879581  672651 main.go:141] libmachine: (ha-422729-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:41:8c", ip: ""} in network mk-ha-422729: {Iface:virbr1 ExpiryTime:2025-04-14 15:00:25 +0000 UTC Type:0 Mac:52:54:00:5b:41:8c Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-422729-m04 Clientid:01:52:54:00:5b:41:8c}
	I0414 14:01:40.879612  672651 main.go:141] libmachine: (ha-422729-m04) DBG | domain ha-422729-m04 has defined IP address 192.168.39.182 and MAC address 52:54:00:5b:41:8c in network mk-ha-422729
	I0414 14:01:40.879772  672651 main.go:141] libmachine: (ha-422729-m04) Calling .GetSSHPort
	I0414 14:01:40.879969  672651 main.go:141] libmachine: (ha-422729-m04) Calling .GetSSHKeyPath
	I0414 14:01:40.880120  672651 main.go:141] libmachine: (ha-422729-m04) Calling .GetSSHUsername
	I0414 14:01:40.880302  672651 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/ha-422729-m04/id_rsa Username:docker}
	I0414 14:01:40.966673  672651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:01:40.982182  672651 status.go:176] ha-422729-m04 status: &{Name:ha-422729-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
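Note: the probe sh -c "df -h /var | awk 'NR==2{print $5}'" seen in the status output above takes row 2 of df's report (the first data row) and prints column 5, the use percentage of /var. A minimal Go sketch of the same extraction; the sample df output is illustrative, the real probe runs over SSH inside the VM:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative "df -h /var" output.
		dfOutput := "Filesystem      Size  Used Avail Use% Mounted on\n" +
			"/dev/vda1        17G  3.1G   13G  20% /var\n"
		rows := strings.Split(strings.TrimSpace(dfOutput), "\n")
		fields := strings.Fields(rows[1]) // awk NR==2: the first data row
		fmt.Println(fields[4])            // awk $5: the Use% column -> "20%"
	}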
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.29s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (39.86s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 node start m02 -v=7 --alsologtostderr
E0414 14:01:54.075491  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-422729 node start m02 -v=7 --alsologtostderr: (38.92216083s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.86s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.14s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
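Note: this test stops the whole cluster, restarts it with --wait=true, and asserts the node list is unchanged. A minimal sketch of that round trip, assuming only the commands visible in this log (error handling trimmed for brevity):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func nodeList(profile string) string {
		out, _ := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", profile).Output()
		return string(out)
	}

	func main() {
		const profile = "ha-422729"
		before := nodeList(profile)
		// Stop everything, then restart and wait for all nodes to come back.
		exec.Command("out/minikube-linux-amd64", "stop", "-p", profile).Run()
		exec.Command("out/minikube-linux-amd64", "start", "-p", profile, "--wait=true").Run()
		if after := nodeList(profile); after != before {
			fmt.Println("node list changed across restart:\n", before, "vs\n", after)
		}
	}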
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-422729 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-422729 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-422729 -v=7 --alsologtostderr: (41.592884766s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-422729 --wait=true -v=7 --alsologtostderr
E0414 14:03:15.998043  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:03:57.934391  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:05:32.135870  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:05:59.839488  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-422729 --wait=true -v=7 --alsologtostderr: (3m19.436834888s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-422729
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.14s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.25s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
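Note: the go-template passed to kubectl below walks every node's status.conditions and prints the status of each "Ready" condition. The same template run through Go's text/template against a hand-built node list; the struct is an illustrative subset of the real NodeList (kubectl matches the lowercase JSON field names, a Go struct matches the exported names):

	package main

	import (
		"os"
		"text/template"
	)

	type condition struct{ Type, Status string }
	type node struct {
		Status struct{ Conditions []condition }
	}

	func main() {
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		var data struct{ Items []node }
		n := node{}
		n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
		data.Items = []node{n, n}         // two nodes, both Ready
		_ = tmpl.Execute(os.Stdout, data) // prints " True" once per node
	}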
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-422729 node delete m03 -v=7 --alsologtostderr: (6.502768148s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.25s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (28.4s)
=== RUN   TestMultiControlPlane/serial/StopCluster
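Note: after the stop, "minikube status" exits non-zero by design. The exit status 7 seen below is consistent with the code being composed from bit flags for host, control plane, and kubelet all being down (1|2|4); that reading is an inference from this run, not a documented contract. A sketch that surfaces the code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-amd64", "-p", "ha-422729", "status").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code := ee.ExitCode()
			// Bit-flag interpretation is an assumption (see note above).
			fmt.Printf("status exit %d: host down=%v, control plane down=%v, kubelet down=%v\n",
				code, code&1 != 0, code&2 != 0, code&4 != 0)
		}
	}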
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-422729 stop -v=7 --alsologtostderr: (28.289206436s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr: exit status 7 (106.850614ms)

-- stdout --
	ha-422729
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422729-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422729-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0414 14:06:59.755474  675121 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:06:59.755720  675121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:06:59.755729  675121 out.go:358] Setting ErrFile to fd 2...
	I0414 14:06:59.755733  675121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:06:59.755939  675121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 14:06:59.756105  675121 out.go:352] Setting JSON to false
	I0414 14:06:59.756136  675121 mustload.go:65] Loading cluster: ha-422729
	I0414 14:06:59.756257  675121 notify.go:220] Checking for updates...
	I0414 14:06:59.756579  675121 config.go:182] Loaded profile config "ha-422729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 14:06:59.756607  675121 status.go:174] checking status of ha-422729 ...
	I0414 14:06:59.757114  675121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:06:59.757176  675121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:06:59.772271  675121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43587
	I0414 14:06:59.772745  675121 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:06:59.773393  675121 main.go:141] libmachine: Using API Version  1
	I0414 14:06:59.773422  675121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:06:59.773812  675121 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:06:59.774042  675121 main.go:141] libmachine: (ha-422729) Calling .GetState
	I0414 14:06:59.775792  675121 status.go:371] ha-422729 host status = "Stopped" (err=<nil>)
	I0414 14:06:59.775808  675121 status.go:384] host is not running, skipping remaining checks
	I0414 14:06:59.775817  675121 status.go:176] ha-422729 status: &{Name:ha-422729 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:06:59.775841  675121 status.go:174] checking status of ha-422729-m02 ...
	I0414 14:06:59.776330  675121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:06:59.776426  675121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:06:59.791566  675121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34119
	I0414 14:06:59.792016  675121 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:06:59.792431  675121 main.go:141] libmachine: Using API Version  1
	I0414 14:06:59.792453  675121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:06:59.792863  675121 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:06:59.793079  675121 main.go:141] libmachine: (ha-422729-m02) Calling .GetState
	I0414 14:06:59.795023  675121 status.go:371] ha-422729-m02 host status = "Stopped" (err=<nil>)
	I0414 14:06:59.795041  675121 status.go:384] host is not running, skipping remaining checks
	I0414 14:06:59.795049  675121 status.go:176] ha-422729-m02 status: &{Name:ha-422729-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:06:59.795089  675121 status.go:174] checking status of ha-422729-m04 ...
	I0414 14:06:59.795441  675121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:06:59.795495  675121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:06:59.811552  675121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0414 14:06:59.812003  675121 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:06:59.812458  675121 main.go:141] libmachine: Using API Version  1
	I0414 14:06:59.812479  675121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:06:59.812837  675121 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:06:59.813039  675121 main.go:141] libmachine: (ha-422729-m04) Calling .GetState
	I0414 14:06:59.814658  675121 status.go:371] ha-422729-m04 host status = "Stopped" (err=<nil>)
	I0414 14:06:59.814680  675121 status.go:384] host is not running, skipping remaining checks
	I0414 14:06:59.814689  675121 status.go:176] ha-422729-m04 status: &{Name:ha-422729-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (28.40s)

TestMultiControlPlane/serial/RestartCluster (145.22s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-422729 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0414 14:08:57.935022  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-422729 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m24.45815795s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (145.22s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (81.68s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-422729 --control-plane -v=7 --alsologtostderr
E0414 14:10:21.005714  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:10:32.135346  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-422729 --control-plane -v=7 --alsologtostderr: (1m20.792764532s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-422729 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.68s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestImageBuild/serial/Setup (51.34s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-737027 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-737027 --driver=kvm2 : (51.341845769s)
--- PASS: TestImageBuild/serial/Setup (51.34s)

TestImageBuild/serial/NormalBuild (1.41s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-737027
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-737027: (1.411733032s)
--- PASS: TestImageBuild/serial/NormalBuild (1.41s)

TestImageBuild/serial/BuildWithBuildArg (0.91s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-737027
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.91s)

TestImageBuild/serial/BuildWithDockerIgnore (0.61s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-737027
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.61s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-737027
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

TestJSONOutput/start/Command (88.96s)
=== RUN   TestJSONOutput/start/Command
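Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (the shape is visible in the TestErrorJSONOutput stdout further down). A minimal consumer sketch, assuming only the fields that appear in this report:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event lists an illustrative subset of the fields seen in this report.
	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// e.g.: out/minikube-linux-amd64 start --output=json | thisprogram
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip anything that is not an event line
			}
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}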
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-449996 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-449996 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m28.96076961s)
--- PASS: TestJSONOutput/start/Command (88.96s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-449996 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-449996 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.62s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-449996 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-449996 --output=json --user=testUser: (12.621335051s)
--- PASS: TestJSONOutput/stop/Command (12.62s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
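Note: the failure path is itself machine-readable. The last event in the stdout below has type io.k8s.sigs.minikube.error and carries name, message, and exitcode fields. A sketch that picks that event apart (the literal is an abbreviated copy of the event shown below):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		line := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev struct {
			Type string            `json:"type"`
			Data map[string]string `json:"data"`
		}
		if err := json.Unmarshal([]byte(line), &ev); err == nil && ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}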
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-360197 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-360197 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.352445ms)

-- stdout --
	{"specversion":"1.0","id":"a8454bc9-d097-4e15-be23-5c6fc77d4dd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-360197] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a497b2cb-fa9c-4a07-8fea-08b7504b4bc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20512"}}
	{"specversion":"1.0","id":"c47fe8ed-359e-4cee-9b7a-a2c156ddd0eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad86d6ee-6b1b-466f-b072-fbd6897a944c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig"}}
	{"specversion":"1.0","id":"afc47ffb-cf2c-4ffd-85ca-731685e9db1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube"}}
	{"specversion":"1.0","id":"591411f5-5ea9-4d37-9524-6c3ff7dfe794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d0c61c28-98ab-4775-92a2-47fe3cd2c5d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a4d2b4df-934b-425f-9a2a-17c5b963cd10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-360197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-360197
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (100.33s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-933478 --driver=kvm2 
E0414 14:13:57.939965  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-933478 --driver=kvm2 : (46.290903398s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-945187 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-945187 --driver=kvm2 : (50.888099203s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-933478
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-945187
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-945187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-945187
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-945187: (1.022495211s)
helpers_test.go:175: Cleaning up "first-933478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-933478
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-933478: (1.014895915s)
--- PASS: TestMinikubeProfile (100.33s)

TestMountStart/serial/StartWithMountFirst (32.62s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-192633 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0414 14:15:32.138097  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-192633 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.6190097s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.62s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
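Note: the verification below is two-fold: the mounted host directory must be listable, and "mount | grep 9p" must find a 9p filesystem entry. The same grep expressed in Go; the sample mount line is illustrative, and the real check runs inside the guest over SSH:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative output of "mount" inside the guest.
		mountOutput := "192.168.39.1 on /minikube-host type 9p (rw,relatime)\n"
		for _, line := range strings.Split(mountOutput, "\n") {
			if strings.Contains(line, " type 9p ") {
				fmt.Println("9p mount present:", line)
			}
		}
	}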
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-192633 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-192633 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (32.96s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-210178 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-210178 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.958909066s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.96s)

TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210178 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210178 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.86s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-192633 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.86s)

TestMountStart/serial/VerifyMountPostDelete (0.49s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210178 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210178 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.49s)

TestMountStart/serial/Stop (2.39s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-210178
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-210178: (2.394009556s)
--- PASS: TestMountStart/serial/Stop (2.39s)

TestMountStart/serial/RestartStopped (24.88s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-210178
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-210178: (23.881740863s)
--- PASS: TestMountStart/serial/RestartStopped (24.88s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210178 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210178 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (132.36s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185794 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0414 14:16:55.200877  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:18:57.934811  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-185794 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m11.930653871s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (132.36s)

TestMultiNode/serial/DeployApp2Nodes (5.46s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
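Note: the deployment below lands one busybox pod per node, and each pod must resolve a public name, the in-cluster service name, and its fully qualified form. A sketch of that fan-out, assuming a configured kubectl; pod discovery mirrors the jsonpath query visible in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same pod-name query as the test uses.
		out, _ := exec.Command("kubectl", "get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			for _, name := range names {
				if err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).Run(); err != nil {
					fmt.Printf("%s: lookup %s failed: %v\n", pod, name, err)
				}
			}
		}
	}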
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-185794 -- rollout status deployment/busybox: (3.846655886s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-bmnzg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-p9rvb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-bmnzg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-p9rvb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-bmnzg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-p9rvb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.46s)

TestMultiNode/serial/PingHostFrom2Pods (0.83s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
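Note: the sh pipeline below, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, grabs line 5 of nslookup's output (the Address line for the queried name) and its third space-delimited field, the resolved host IP, which each pod then pings. The same extraction in Go; the sample output is an assumption about busybox's nslookup format:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		nslookup := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.39.1 host.minikube.internal\n"
		lines := strings.Split(nslookup, "\n")
		fields := strings.Split(lines[4], " ") // awk 'NR==5': the fifth line
		fmt.Println(fields[2])                 // cut -d' ' -f3 -> "192.168.39.1"
	}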
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-bmnzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-bmnzg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-p9rvb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185794 -- exec busybox-58667487b6-p9rvb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

TestMultiNode/serial/AddNode (55.42s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-185794 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-185794 -v 3 --alsologtostderr: (54.829215617s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.42s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
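Note: the jsonpath query below dumps every node's metadata.labels. An equivalent without jsonpath, decoding the JSON directly; the minikube.k8s.io/name key printed at the end is an assumption about which label matters here:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "multinode-185794",
			"get", "nodes", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var nodes struct {
			Items []struct {
				Metadata struct {
					Name   string            `json:"name"`
					Labels map[string]string `json:"labels"`
				} `json:"metadata"`
			} `json:"items"`
		}
		if err := json.Unmarshal(out, &nodes); err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Metadata.Name, n.Metadata.Labels["minikube.k8s.io/name"])
		}
	}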
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-185794 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.6s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (7.51s)
=== RUN   TestMultiNode/serial/CopyFile
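Note: the long run of cp/ssh pairs below is a copy matrix: push a test file to every node, then copy it between each ordered pair of nodes, cat-ing it back over ssh after every step. A compact sketch of the same loop; errors are ignored for brevity, whereas the real helpers assert on every step:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		full := append([]string{"-p", "multinode-185794"}, args...)
		return exec.Command("out/minikube-linux-amd64", full...).Run()
	}

	func main() {
		nodes := []string{"multinode-185794", "multinode-185794-m02", "multinode-185794-m03"}
		for _, n := range nodes {
			run("cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
			run("ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		}
		for _, src := range nodes {
			for _, dst := range nodes {
				if src == dst {
					continue
				}
				dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
				run("cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
				run("ssh", "-n", dst, "sudo cat "+dstPath)
			}
		}
	}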
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp testdata/cp-test.txt multinode-185794:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1514598436/001/cp-test_multinode-185794.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794:/home/docker/cp-test.txt multinode-185794-m02:/home/docker/cp-test_multinode-185794_multinode-185794-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m02 "sudo cat /home/docker/cp-test_multinode-185794_multinode-185794-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794:/home/docker/cp-test.txt multinode-185794-m03:/home/docker/cp-test_multinode-185794_multinode-185794-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m03 "sudo cat /home/docker/cp-test_multinode-185794_multinode-185794-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp testdata/cp-test.txt multinode-185794-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1514598436/001/cp-test_multinode-185794-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794-m02:/home/docker/cp-test.txt multinode-185794:/home/docker/cp-test_multinode-185794-m02_multinode-185794.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794 "sudo cat /home/docker/cp-test_multinode-185794-m02_multinode-185794.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794-m02:/home/docker/cp-test.txt multinode-185794-m03:/home/docker/cp-test_multinode-185794-m02_multinode-185794-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m03 "sudo cat /home/docker/cp-test_multinode-185794-m02_multinode-185794-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp testdata/cp-test.txt multinode-185794-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1514598436/001/cp-test_multinode-185794-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794-m03:/home/docker/cp-test.txt multinode-185794:/home/docker/cp-test_multinode-185794-m03_multinode-185794.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794 "sudo cat /home/docker/cp-test_multinode-185794-m03_multinode-185794.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 cp multinode-185794-m03:/home/docker/cp-test.txt multinode-185794-m02:/home/docker/cp-test_multinode-185794-m03_multinode-185794-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 ssh -n multinode-185794-m02 "sudo cat /home/docker/cp-test_multinode-185794-m03_multinode-185794-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.51s)

TestMultiNode/serial/StopNode (3.31s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-185794 node stop m03: (2.445833919s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185794 status: exit status 7 (435.808665ms)

-- stdout --
	multinode-185794
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-185794-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-185794-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185794 status --alsologtostderr: exit status 7 (429.254807ms)

-- stdout --
	multinode-185794
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-185794-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-185794-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0414 14:20:12.703954  684104 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:20:12.704092  684104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:20:12.704112  684104 out.go:358] Setting ErrFile to fd 2...
	I0414 14:20:12.704116  684104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:20:12.704313  684104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 14:20:12.704515  684104 out.go:352] Setting JSON to false
	I0414 14:20:12.704554  684104 mustload.go:65] Loading cluster: multinode-185794
	I0414 14:20:12.704657  684104 notify.go:220] Checking for updates...
	I0414 14:20:12.705001  684104 config.go:182] Loaded profile config "multinode-185794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 14:20:12.705029  684104 status.go:174] checking status of multinode-185794 ...
	I0414 14:20:12.705601  684104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:20:12.705657  684104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:20:12.722965  684104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0414 14:20:12.723470  684104 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:20:12.724025  684104 main.go:141] libmachine: Using API Version  1
	I0414 14:20:12.724046  684104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:20:12.724528  684104 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:20:12.724769  684104 main.go:141] libmachine: (multinode-185794) Calling .GetState
	I0414 14:20:12.726504  684104 status.go:371] multinode-185794 host status = "Running" (err=<nil>)
	I0414 14:20:12.726525  684104 host.go:66] Checking if "multinode-185794" exists ...
	I0414 14:20:12.726862  684104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:20:12.726902  684104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:20:12.743422  684104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I0414 14:20:12.743928  684104 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:20:12.744381  684104 main.go:141] libmachine: Using API Version  1
	I0414 14:20:12.744411  684104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:20:12.744804  684104 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:20:12.744982  684104 main.go:141] libmachine: (multinode-185794) Calling .GetIP
	I0414 14:20:12.747854  684104 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:20:12.748296  684104 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:02 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:20:12.748325  684104 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:20:12.748523  684104 host.go:66] Checking if "multinode-185794" exists ...
	I0414 14:20:12.748836  684104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:20:12.748882  684104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:20:12.765863  684104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0414 14:20:12.766273  684104 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:20:12.766734  684104 main.go:141] libmachine: Using API Version  1
	I0414 14:20:12.766756  684104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:20:12.767087  684104 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:20:12.767281  684104 main.go:141] libmachine: (multinode-185794) Calling .DriverName
	I0414 14:20:12.767484  684104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:20:12.767508  684104 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
	I0414 14:20:12.770219  684104 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:20:12.770677  684104 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:02 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
	I0414 14:20:12.770702  684104 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
	I0414 14:20:12.770814  684104 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
	I0414 14:20:12.770998  684104 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
	I0414 14:20:12.771175  684104 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
	I0414 14:20:12.771351  684104 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
	I0414 14:20:12.850353  684104 ssh_runner.go:195] Run: systemctl --version
	I0414 14:20:12.856089  684104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:20:12.869923  684104 kubeconfig.go:125] found "multinode-185794" server: "https://192.168.39.164:8443"
	I0414 14:20:12.869966  684104 api_server.go:166] Checking apiserver status ...
	I0414 14:20:12.870007  684104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:20:12.888428  684104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1915/cgroup
	W0414 14:20:12.897906  684104 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1915/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:20:12.897972  684104 ssh_runner.go:195] Run: ls
	I0414 14:20:12.902258  684104 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0414 14:20:12.906401  684104 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0414 14:20:12.906433  684104 status.go:463] multinode-185794 apiserver status = Running (err=<nil>)
	I0414 14:20:12.906447  684104 status.go:176] multinode-185794 status: &{Name:multinode-185794 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:20:12.906470  684104 status.go:174] checking status of multinode-185794-m02 ...
	I0414 14:20:12.906891  684104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:20:12.906937  684104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:20:12.924023  684104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43715
	I0414 14:20:12.924538  684104 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:20:12.925016  684104 main.go:141] libmachine: Using API Version  1
	I0414 14:20:12.925046  684104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:20:12.925358  684104 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:20:12.925563  684104 main.go:141] libmachine: (multinode-185794-m02) Calling .GetState
	I0414 14:20:12.927224  684104 status.go:371] multinode-185794-m02 host status = "Running" (err=<nil>)
	I0414 14:20:12.927242  684104 host.go:66] Checking if "multinode-185794-m02" exists ...
	I0414 14:20:12.927633  684104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:20:12.927684  684104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:20:12.944138  684104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0414 14:20:12.944594  684104 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:20:12.945142  684104 main.go:141] libmachine: Using API Version  1
	I0414 14:20:12.945163  684104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:20:12.945487  684104 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:20:12.945665  684104 main.go:141] libmachine: (multinode-185794-m02) Calling .GetIP
	I0414 14:20:12.948367  684104 main.go:141] libmachine: (multinode-185794-m02) DBG | domain multinode-185794-m02 has defined MAC address 52:54:00:7e:15:aa in network mk-multinode-185794
	I0414 14:20:12.948804  684104 main.go:141] libmachine: (multinode-185794-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:15:aa", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:18:16 +0000 UTC Type:0 Mac:52:54:00:7e:15:aa Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:multinode-185794-m02 Clientid:01:52:54:00:7e:15:aa}
	I0414 14:20:12.948831  684104 main.go:141] libmachine: (multinode-185794-m02) DBG | domain multinode-185794-m02 has defined IP address 192.168.39.75 and MAC address 52:54:00:7e:15:aa in network mk-multinode-185794
	I0414 14:20:12.948997  684104 host.go:66] Checking if "multinode-185794-m02" exists ...
	I0414 14:20:12.949282  684104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:20:12.949322  684104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:20:12.965056  684104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I0414 14:20:12.965493  684104 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:20:12.965883  684104 main.go:141] libmachine: Using API Version  1
	I0414 14:20:12.965906  684104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:20:12.966226  684104 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:20:12.966404  684104 main.go:141] libmachine: (multinode-185794-m02) Calling .DriverName
	I0414 14:20:12.966557  684104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:20:12.966582  684104 main.go:141] libmachine: (multinode-185794-m02) Calling .GetSSHHostname
	I0414 14:20:12.969201  684104 main.go:141] libmachine: (multinode-185794-m02) DBG | domain multinode-185794-m02 has defined MAC address 52:54:00:7e:15:aa in network mk-multinode-185794
	I0414 14:20:12.969583  684104 main.go:141] libmachine: (multinode-185794-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:15:aa", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:18:16 +0000 UTC Type:0 Mac:52:54:00:7e:15:aa Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:multinode-185794-m02 Clientid:01:52:54:00:7e:15:aa}
	I0414 14:20:12.969612  684104 main.go:141] libmachine: (multinode-185794-m02) DBG | domain multinode-185794-m02 has defined IP address 192.168.39.75 and MAC address 52:54:00:7e:15:aa in network mk-multinode-185794
	I0414 14:20:12.969747  684104 main.go:141] libmachine: (multinode-185794-m02) Calling .GetSSHPort
	I0414 14:20:12.969903  684104 main.go:141] libmachine: (multinode-185794-m02) Calling .GetSSHKeyPath
	I0414 14:20:12.970068  684104 main.go:141] libmachine: (multinode-185794-m02) Calling .GetSSHUsername
	I0414 14:20:12.970225  684104 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794-m02/id_rsa Username:docker}
	I0414 14:20:13.050955  684104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:20:13.064475  684104 status.go:176] multinode-185794-m02 status: &{Name:multinode-185794-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:20:13.064528  684104 status.go:174] checking status of multinode-185794-m03 ...
	I0414 14:20:13.064872  684104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:20:13.064915  684104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:20:13.081166  684104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38463
	I0414 14:20:13.081714  684104 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:20:13.082282  684104 main.go:141] libmachine: Using API Version  1
	I0414 14:20:13.082306  684104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:20:13.082648  684104 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:20:13.082815  684104 main.go:141] libmachine: (multinode-185794-m03) Calling .GetState
	I0414 14:20:13.084384  684104 status.go:371] multinode-185794-m03 host status = "Stopped" (err=<nil>)
	I0414 14:20:13.084402  684104 status.go:384] host is not running, skipping remaining checks
	I0414 14:20:13.084410  684104 status.go:176] multinode-185794-m03 status: &{Name:multinode-185794-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.31s)
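
The stderr trace above shows the shape of minikube's status probe: locate the kube-apiserver process with pgrep, try to read its freezer cgroup (which exits 1 on cgroup-v2 hosts, hence the warning), and finally confirm health over HTTPS at /healthz. Below is a minimal Go sketch of that last step, assuming the endpoint from the log; it skips certificate verification because, unlike minikube, it does not load the cluster's CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a certificate signed by minikube's own CA;
	// this sketch skips verification instead of loading that CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.164:8443/healthz") // endpoint from the log above
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", matching the trace.
	fmt.Printf("%d %s => apiserver status = Running\n", resp.StatusCode, body)
}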

TestMultiNode/serial/StartAfterStop (42.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 node start m03 -v=7 --alsologtostderr
E0414 14:20:32.135713  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-185794 node start m03 -v=7 --alsologtostderr: (41.717741575s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.37s)

TestMultiNode/serial/RestartKeepsNodes (189.4s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-185794
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-185794
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-185794: (27.40885102s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr
E0414 14:23:57.934385  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr: (2m41.89529206s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-185794
--- PASS: TestMultiNode/serial/RestartKeepsNodes (189.40s)
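
The invariant this test checks is simply that "minikube node list" reports the same nodes before the stop and after the restart. A sketch of that comparison via os/exec, assuming a minikube binary on PATH and an existing profile (the profile name is taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// nodeList captures the output of `minikube node list` for a profile.
func nodeList(profile string) (string, error) {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	return string(out), err
}

func main() {
	const profile = "multinode-185794" // from the log; any existing profile works

	before, err := nodeList(profile)
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	// ... the test stops and restarts the cluster at this point ...
	after, _ := nodeList(profile)

	if before == after {
		fmt.Println("restart kept the node list intact")
	} else {
		fmt.Printf("node list changed:\nbefore:\n%safter:\n%s", before, after)
	}
}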

TestMultiNode/serial/DeleteNode (2.37s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-185794 node delete m03: (1.77956517s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.37s)
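
The go-template handed to kubectl above walks each node's status.conditions and prints the status of the condition whose type is "Ready". The same template can be exercised locally with Go's text/template against a stand-in structure (the field names mirror the Kubernetes node schema; the data is illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template body as the kubectl invocation above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Minimal stand-in for `kubectl get nodes -o json`: two nodes, both Ready.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes)
	// Prints " True" once per node, which is what the test asserts on.
}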

TestMultiNode/serial/StopMultiNode (25.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-185794 stop: (24.849204691s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185794 status: exit status 7 (85.694604ms)

-- stdout --
	multinode-185794
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-185794-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185794 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185794 status --alsologtostderr: exit status 7 (87.690159ms)

-- stdout --
	multinode-185794
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-185794-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0414 14:24:32.208545  685919 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:24:32.208835  685919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:24:32.208848  685919 out.go:358] Setting ErrFile to fd 2...
	I0414 14:24:32.208855  685919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:24:32.209079  685919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
	I0414 14:24:32.209284  685919 out.go:352] Setting JSON to false
	I0414 14:24:32.209329  685919 mustload.go:65] Loading cluster: multinode-185794
	I0414 14:24:32.209387  685919 notify.go:220] Checking for updates...
	I0414 14:24:32.209743  685919 config.go:182] Loaded profile config "multinode-185794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0414 14:24:32.209774  685919 status.go:174] checking status of multinode-185794 ...
	I0414 14:24:32.210211  685919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:24:32.210299  685919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:24:32.225674  685919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0414 14:24:32.226143  685919 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:24:32.226615  685919 main.go:141] libmachine: Using API Version  1
	I0414 14:24:32.226640  685919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:24:32.227023  685919 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:24:32.227219  685919 main.go:141] libmachine: (multinode-185794) Calling .GetState
	I0414 14:24:32.228995  685919 status.go:371] multinode-185794 host status = "Stopped" (err=<nil>)
	I0414 14:24:32.229019  685919 status.go:384] host is not running, skipping remaining checks
	I0414 14:24:32.229028  685919 status.go:176] multinode-185794 status: &{Name:multinode-185794 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:24:32.229057  685919 status.go:174] checking status of multinode-185794-m02 ...
	I0414 14:24:32.229422  685919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0414 14:24:32.229469  685919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:24:32.244971  685919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I0414 14:24:32.245342  685919 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:24:32.245851  685919 main.go:141] libmachine: Using API Version  1
	I0414 14:24:32.245874  685919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:24:32.246184  685919 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:24:32.246376  685919 main.go:141] libmachine: (multinode-185794-m02) Calling .GetState
	I0414 14:24:32.247887  685919 status.go:371] multinode-185794-m02 host status = "Stopped" (err=<nil>)
	I0414 14:24:32.247905  685919 status.go:384] host is not running, skipping remaining checks
	I0414 14:24:32.247913  685919 status.go:176] multinode-185794-m02 status: &{Name:multinode-185794-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.02s)
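
Note the exit code: "minikube status" reports cluster state through its exit status as well as its output, and both runs above exit 7 because every host is stopped, which the test treats as expected rather than as a failure. A toy version of that mapping, under the assumption (read off the log, not minikube's source) that 0 means everything is running and 7 means something is stopped:

package main

import (
	"fmt"
	"os"
)

// NodeStatus holds a subset of the fields printed by "minikube status".
type NodeStatus struct {
	Name, Host, Kubelet string
}

// exitCode is a simplified stand-in for minikube's status-to-exit-code
// mapping: 0 when all hosts run, 7 when any host or kubelet is stopped.
func exitCode(nodes []NodeStatus) int {
	for _, n := range nodes {
		if n.Host != "Running" || n.Kubelet != "Running" {
			return 7
		}
	}
	return 0
}

func main() {
	nodes := []NodeStatus{
		{Name: "multinode-185794", Host: "Stopped", Kubelet: "Stopped"},
		{Name: "multinode-185794-m02", Host: "Stopped", Kubelet: "Stopped"},
	}
	for _, n := range nodes {
		fmt.Printf("%s\n\thost: %s\n\tkubelet: %s\n", n.Name, n.Host, n.Kubelet)
	}
	os.Exit(exitCode(nodes)) // exit status 7, as in the runs above
}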

TestMultiNode/serial/ValidateNameConflict (50.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-185794
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185794-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-185794-m02 --driver=kvm2 : exit status 14 (69.002572ms)

-- stdout --
	* [multinode-185794-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-185794-m02' is duplicated with machine name 'multinode-185794-m02' in profile 'multinode-185794'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185794-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-185794-m03 --driver=kvm2 : (49.7648263s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-185794
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-185794: exit status 103 (186.668325ms)

-- stdout --
	* The control-plane node multinode-185794 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-185794"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-185794-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.87s)
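
The conflict above falls out of minikube's machine-naming scheme: a multi-node profile owns machines named <profile>, <profile>-m02, <profile>-m03, and so on, so a new profile named multinode-185794-m02 collides with an existing machine, while multinode-185794-m03 is free again because DeleteNode removed that node earlier. A rough sketch of such a uniqueness check (the helpers and their rules are illustrative, not minikube's actual validation code):

package main

import "fmt"

// machineNames expands a profile into the machine names a cluster with
// the given node count would own, following the -mNN suffix convention.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

// conflicts reports whether a proposed profile name collides with any
// machine owned by an existing profile.
func conflicts(proposed, existingProfile string, nodes int) bool {
	for _, name := range machineNames(existingProfile, nodes) {
		if name == proposed {
			return true
		}
	}
	return false
}

func main() {
	// After DeleteNode, the multinode profile owns the primary machine and -m02.
	fmt.Println(conflicts("multinode-185794-m02", "multinode-185794", 2)) // true  -> exit status 14 above
	fmt.Println(conflicts("multinode-185794-m03", "multinode-185794", 2)) // false -> that start succeeds above
}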

TestPreload (189.55s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-352112 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0414 14:27:01.008662  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-352112 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m0.636856017s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-352112 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-352112 image pull gcr.io/k8s-minikube/busybox: (2.015503361s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-352112
E0414 14:28:57.939512  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-352112: (12.556968188s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-352112 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-352112 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (53.196245279s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-352112 image list
helpers_test.go:175: Cleaning up "test-preload-352112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-352112
--- PASS: TestPreload (189.55s)

TestScheduledStopUnix (127.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-516624 --memory=2048 --driver=kvm2 
E0414 14:30:32.138958  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-516624 --memory=2048 --driver=kvm2 : (55.460571952s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516624 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-516624 -n scheduled-stop-516624
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516624 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 14:30:53.977518  659249 retry.go:31] will retry after 60.813µs: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.978660  659249 retry.go:31] will retry after 83.013µs: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.979803  659249 retry.go:31] will retry after 262.817µs: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.980979  659249 retry.go:31] will retry after 453.54µs: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.982136  659249 retry.go:31] will retry after 400.086µs: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.983339  659249 retry.go:31] will retry after 618.994µs: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.984516  659249 retry.go:31] will retry after 1.654181ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.986754  659249 retry.go:31] will retry after 2.270794ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.990030  659249 retry.go:31] will retry after 2.918721ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.993267  659249 retry.go:31] will retry after 3.943004ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:53.997533  659249 retry.go:31] will retry after 6.402814ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:54.004810  659249 retry.go:31] will retry after 11.536181ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:54.017092  659249 retry.go:31] will retry after 7.442609ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:54.025408  659249 retry.go:31] will retry after 28.564312ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
I0414 14:30:54.054761  659249 retry.go:31] will retry after 21.991394ms: open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/scheduled-stop-516624/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516624 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516624 -n scheduled-stop-516624
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-516624
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516624 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-516624
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-516624: exit status 7 (73.32599ms)

-- stdout --
	scheduled-stop-516624
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516624 -n scheduled-stop-516624
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516624 -n scheduled-stop-516624: exit status 7 (69.208219ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-516624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-516624
--- PASS: TestScheduledStopUnix (127.13s)
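
The burst of retry.go lines in this test is the harness polling for the scheduled-stop pid file with a backoff that roughly doubles between attempts, with jitter, starting in the tens of microseconds. A generic sketch of that pattern, assuming nothing about the harness beyond what the log shows (the pid path here is illustrative):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping for a jittered, roughly doubling interval in between.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter: sleep somewhere between 0.5x and 1.5x the nominal delay.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	pidPath := "/tmp/scheduled-stop-demo/pid" // illustrative path
	err := retryWithBackoff(15, 60*time.Microsecond, func() error {
		f, err := os.Open(pidPath)
		if err != nil {
			return err
		}
		return f.Close()
	})
	fmt.Println("final result:", err)
}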

TestSkaffold (125.96s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3975901975 version
skaffold_test.go:63: skaffold version: v2.15.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-594771 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-594771 --memory=2600 --driver=kvm2 : (48.015974211s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3975901975 run --minikube-profile skaffold-594771 --kube-context skaffold-594771 --status-check=true --port-forward=false --interactive=false
E0414 14:33:35.204925  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:57.934455  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3975901975 run --minikube-profile skaffold-594771 --kube-context skaffold-594771 --status-check=true --port-forward=false --interactive=false: (1m4.623272866s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6b4bbf78fc-6slz8" [09076e71-08d2-4bcd-bd2f-5643582bcb5a] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004218956s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-74c5ff9865-g4sv7" [629209a5-869f-4523-9ef8-7ba8b7d6e8f3] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003278722s
helpers_test.go:175: Cleaning up "skaffold-594771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-594771
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-594771: (1.255311664s)
--- PASS: TestSkaffold (125.96s)

TestRunningBinaryUpgrade (202.9s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3550420479 start -p running-upgrade-383136 --memory=2200 --vm-driver=kvm2 
E0414 14:35:32.135868  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3550420479 start -p running-upgrade-383136 --memory=2200 --vm-driver=kvm2 : (2m13.771732929s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-383136 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-383136 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m7.323666704s)
helpers_test.go:175: Cleaning up "running-upgrade-383136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-383136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-383136: (1.184323622s)
--- PASS: TestRunningBinaryUpgrade (202.90s)

TestKubernetesUpgrade (185.09s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
E0414 14:39:39.944643  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m22.081186507s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-444824
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-444824: (3.32082225s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-444824 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-444824 status --format={{.Host}}: exit status 7 (82.608595ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2 : (43.732240733s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-444824 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (102.809067ms)

-- stdout --
	* [kubernetes-upgrade-444824] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-444824
	    minikube start -p kubernetes-upgrade-444824 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4448242 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-444824 --kubernetes-version=v1.32.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-444824 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2 : (54.409858802s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-444824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-444824
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-444824: (1.285058027s)
--- PASS: TestKubernetesUpgrade (185.09s)
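
The exit status 106 above is minikube refusing an in-place downgrade: it compares the requested Kubernetes version with the one the existing cluster runs and bails out when the request is older, suggesting delete-and-recreate instead. A self-contained sketch of that comparison (minikube's real check uses a proper semver library; this hand-rolled parse ignores pre-release tags and is only for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a version like "v1.32.2" into its numeric components.
func parse(v string) (parts [3]int) {
	for i, s := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		parts[i], _ = strconv.Atoi(s)
	}
	return parts
}

// isDowngrade reports whether requested is older than current.
func isDowngrade(current, requested string) bool {
	c, r := parse(current), parse(requested)
	for i := 0; i < 3; i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return false
}

func main() {
	if isDowngrade("v1.32.2", "v1.20.0") {
		fmt.Println("unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0")
	}
	fmt.Println(isDowngrade("v1.32.2", "v1.32.2")) // false: same-version restarts are fine, as the last step shows
}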

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestPause/serial/Start (90.53s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-738000 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-738000 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m30.527559675s)
--- PASS: TestPause/serial/Start (90.53s)

TestStoppedBinaryUpgrade/Upgrade (180.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1318587445 start -p stopped-upgrade-177851 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1318587445 start -p stopped-upgrade-177851 --memory=2200 --vm-driver=kvm2 : (1m51.647445656s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1318587445 -p stopped-upgrade-177851 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1318587445 -p stopped-upgrade-177851 stop: (13.188570046s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-177851 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-177851 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (55.590075078s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (180.43s)

TestPause/serial/SecondStartNoReconfiguration (56.48s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-738000 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-738000 --alsologtostderr -v=1 --driver=kvm2 : (56.450979489s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069148 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-069148 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (69.458523ms)

-- stdout --
	* [NoKubernetes-069148] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
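
MK_USAGE, which the log pairs with exit status 14, flags a contradictory command line: --no-kubernetes and --kubernetes-version cannot be combined. A minimal stand-alone sketch of that kind of mutual-exclusion check using the standard flag package (the flag names match the test; the validation body is illustrative, not minikube's code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The two flags contradict each other: refuse the combination up front
	// with a usage-class exit code, mirroring the exit status 14 above.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags OK")
}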

TestNoKubernetes/serial/StartWithK8s (68.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069148 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069148 --driver=kvm2 : (1m7.993617872s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-069148 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (68.33s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-738000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-738000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-738000 --output=json --layout=cluster: exit status 2 (268.841319ms)

-- stdout --
	{"Name":"pause-738000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-738000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
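
The --output=json --layout=cluster form encodes component state with HTTP-flavoured status codes: 200 for OK, 405 for Stopped, and 418 for Paused, while the command's own exit status 2 signals a cluster that is not fully running. A sketch that unmarshals the JSON from the log into Go types (the struct covers only the fields visible above, not minikube's full schema):

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterStatus models just the fields visible in the log above.
type ClusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	raw := `{"Name":"pause-738000","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-738000","Components":{
	    "apiserver":{"StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for name, c := range st.Nodes[0].Components {
		fmt.Printf("  %s: %d %s\n", name, c.StatusCode, c.StatusName)
	}
}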

TestPause/serial/Unpause (0.58s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-738000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

TestPause/serial/PauseAgain (0.69s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-738000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.69s)

TestPause/serial/DeletePaused (0.85s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-738000 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.85s)

TestPause/serial/VerifyDeletedResources (0.65s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.65s)

TestNoKubernetes/serial/StartWithStopK8s (36.01s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069148 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069148 --no-kubernetes --driver=kvm2 : (34.687877719s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-069148 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-069148 status -o json: exit status 2 (278.287669ms)

-- stdout --
	{"Name":"NoKubernetes-069148","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-069148
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-069148: (1.046627488s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (36.01s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-177851
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-177851: (2.196269404s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.20s)

TestNoKubernetes/serial/Start (49.25s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069148 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069148 --no-kubernetes --driver=kvm2 : (49.247089428s)
--- PASS: TestNoKubernetes/serial/Start (49.25s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-069148 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-069148 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.321212ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
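
The stderr line "ssh: Process exited with status 3" is systemd speaking: systemctl is-active exits 0 for an active unit and 3 for an inactive one, and minikube ssh folds any non-zero remote status into its own exit status 1, which is what the assertion relies on. A small sketch of reading that exit code with os/exec (it runs against the local systemd, so the unit name is only an example):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" prints nothing and reports
	// purely through its exit status: 0 = active, 3 = inactive.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet is not active, exit status", exitErr.ExitCode()) // 3 on a host without kubelet running
	default:
		fmt.Println("could not run systemctl:", err)
	}
}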

TestNoKubernetes/serial/ProfileList (0.58s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.58s)

TestNoKubernetes/serial/Stop (2.48s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-069148
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-069148: (2.477019038s)
--- PASS: TestNoKubernetes/serial/Stop (2.48s)

TestNoKubernetes/serial/StartNoArgs (96.07s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069148 --driver=kvm2 
E0414 14:38:57.935583  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:58.965746  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:58.972223  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:58.983806  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:59.005321  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:59.046896  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:59.128558  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:59.290291  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:59.612042  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:39:00.254401  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069148 --driver=kvm2 : (1m36.067471812s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (96.07s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-069148 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-069148 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.558007ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (151.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-817380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0414 14:41:42.827964  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-817380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m31.033018765s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.03s)

TestStartStop/group/embed-certs/serial/FirstStart (102.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-284743 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-284743 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.2: (1m42.361367348s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (102.36s)

TestStartStop/group/no-preload/serial/FirstStart (103.45s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-892953 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-892953 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.2: (1m43.452303629s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.45s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-946130 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.2
E0414 14:43:41.010548  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-946130 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.2: (1m15.280172382s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.28s)

TestStartStop/group/embed-certs/serial/DeployApp (9.64s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-284743 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [18e558d0-3152-4069-a373-b40f7e358ded] Pending
helpers_test.go:344: "busybox" [18e558d0-3152-4069-a373-b40f7e358ded] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [18e558d0-3152-4069-a373-b40f7e358ded] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004065478s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-284743 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.64s)
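
Note: every DeployApp step follows the same three-part pattern: create the busybox pod from a manifest, poll pods labelled integration-test=busybox until they report Running (with an 8m ceiling), then exec into the pod to read its file-descriptor limit. Condensed, using the commands from this run:

	kubectl --context embed-certs-284743 create -f testdata/busybox.yaml
	# ...test waits for pods matching integration-test=busybox to become healthy...
	kubectl --context embed-certs-284743 exec busybox -- /bin/sh -c "ulimit -n"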

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-817380 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [708abb25-d2fc-4993-96be-ce6f46c0f603] Pending
helpers_test.go:344: "busybox" [708abb25-d2fc-4993-96be-ce6f46c0f603] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [708abb25-d2fc-4993-96be-ce6f46c0f603] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005374358s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-817380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-284743 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0414 14:43:57.934976  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-284743 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059657532s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-284743 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)
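
Note: the --images and --registries flags redirect where an addon pulls its images from; here the test points metrics-server at an echoserver image on a deliberately fake registry, then inspects the deployment to confirm the override landed. The same two steps by hand, with names taken from this run:

	out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-284743 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context embed-certs-284743 describe deploy/metrics-server -n kube-system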

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-817380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-817380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.130885709s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-817380 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (13.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-284743 --alsologtostderr -v=3
E0414 14:43:58.965445  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-284743 --alsologtostderr -v=3: (13.387103963s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.39s)

TestStartStop/group/old-k8s-version/serial/Stop (13.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-817380 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-817380 --alsologtostderr -v=3: (13.395537822s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284743 -n embed-certs-284743
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284743 -n embed-certs-284743: exit status 7 (78.322225ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-284743 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
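
Note: with the VM stopped, "minikube status" reports the host as Stopped and exits 7, which the harness explicitly tolerates ("may be ok" above); addons can still be enabled against the stopped profile and take effect on the next start. The sequence from this run, condensed:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284743 -n embed-certs-284743   # Stopped, exit 7
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-284743 --images=MetricsScraper=registry.k8s.io/echoserver:1.4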

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (316.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-284743 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-284743 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.2: (5m15.80299167s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284743 -n embed-certs-284743
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (316.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-817380 -n old-k8s-version-817380
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-817380 -n old-k8s-version-817380: exit status 7 (77.393006ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-817380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (553.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-817380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-817380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (9m12.762401188s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-817380 -n old-k8s-version-817380
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (553.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-946130 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6612b697-5c2a-4075-a8ea-04b1f3a8faed] Pending
helpers_test.go:344: "busybox" [6612b697-5c2a-4075-a8ea-04b1f3a8faed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6612b697-5c2a-4075-a8ea-04b1f3a8faed] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003591472s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-946130 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-892953 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a28e50df-d2fa-4866-bfa7-dfac4a03c505] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0414 14:44:26.669383  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [a28e50df-d2fa-4866-bfa7-dfac4a03c505] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004825124s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-892953 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-946130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-946130 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-946130 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-946130 --alsologtostderr -v=3: (13.331239284s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-892953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-892953 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/no-preload/serial/Stop (13.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-892953 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-892953 --alsologtostderr -v=3: (13.331758552s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130: exit status 7 (70.173067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-946130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-946130 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-946130 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.2: (5m14.416572875s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.69s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892953 -n no-preload-892953
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892953 -n no-preload-892953: exit status 7 (94.911596ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-892953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (329.07s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-892953 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.2
E0414 14:45:13.123887  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:13.130323  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:13.141736  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:13.163217  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:13.204725  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:13.286265  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:13.447892  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:13.769367  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:14.410691  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:15.692678  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:18.254855  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:23.376429  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:32.135787  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:33.618205  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:45:54.100080  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:46:35.061747  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:47:56.983799  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:48:57.934453  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:48:58.965078  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-892953 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.2: (5m28.792098753s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892953 -n no-preload-892953
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.07s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lx5rd" [309f8f8f-3d84-40e4-8471-d32e6968017f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lx5rd" [309f8f8f-3d84-40e4-8471-d32e6968017f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004359844s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lx5rd" [309f8f8f-3d84-40e4-8471-d32e6968017f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004283206s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-284743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-284743 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-284743 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284743 -n embed-certs-284743
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284743 -n embed-certs-284743: exit status 2 (250.364606ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-284743 -n embed-certs-284743
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-284743 -n embed-certs-284743: exit status 2 (246.010436ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-284743 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284743 -n embed-certs-284743
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-284743 -n embed-certs-284743
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.50s)
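
Note: the Pause step doubles as an exit-code check: while the cluster is paused, both status probes exit 2, with the API server reported as Paused and the kubelet as Stopped, and the same probes succeed again after unpause. The command sequence from this run:

	out/minikube-linux-amd64 pause -p embed-certs-284743 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284743 -n embed-certs-284743   # Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-284743 -n embed-certs-284743     # Stopped, exit 2
	out/minikube-linux-amd64 unpause -p embed-certs-284743 --alsologtostderr -v=1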

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (65.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-343449 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-343449 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.2: (1m5.850990653s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-k54vl" [e6e3b62a-e64b-4123-9269-f4ed0e50ce3e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003710569s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-k54vl" [e6e3b62a-e64b-4123-9269-f4ed0e50ce3e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004482257s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-946130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-946130 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-946130 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130: exit status 2 (253.030075ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130: exit status 2 (252.279377ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-946130 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-946130 -n default-k8s-diff-port-946130
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.53s)

TestNetworkPlugins/group/auto/Start (66.51s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0414 14:50:13.123854  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:50:15.206343  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m6.511215309s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.51s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-s4ssh" [41c536b2-a901-41ae-b2a7-25d56c6039a7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004081782s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-s4ssh" [41c536b2-a901-41ae-b2a7-25d56c6039a7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003901845s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-892953 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-892953 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.52s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-892953 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-892953 -n no-preload-892953
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-892953 -n no-preload-892953: exit status 2 (244.273293ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-892953 -n no-preload-892953
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-892953 -n no-preload-892953: exit status 2 (248.157735ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-892953 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-892953 -n no-preload-892953
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-892953 -n no-preload-892953
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.52s)

TestNetworkPlugins/group/flannel/Start (87.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E0414 14:50:32.135582  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:50:40.825491  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m27.926320141s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.93s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-343449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-343449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.101932347s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/newest-cni/serial/Stop (13.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-343449 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-343449 --alsologtostderr -v=3: (13.346426149s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-343449 -n newest-cni-343449
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-343449 -n newest-cni-343449: exit status 7 (68.147944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-343449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (48.42s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-343449 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-343449 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.2: (48.123104063s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-343449 -n newest-cni-343449
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.42s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-142288 "pgrep -a kubelet"
I0414 14:51:18.275829  659249 config.go:182] Loaded profile config "auto-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hbqs9" [32259d54-a212-4f2a-8836-8daf8b865ef5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hbqs9" [32259d54-a212-4f2a-8836-8daf8b865ef5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004966953s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.27s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
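
Note: DNS, Localhost and HairPin all reuse the netcat deployment started in NetCatPod: an in-cluster nslookup exercises service DNS, a loopback connect exercises localhost traffic, and a connect to the pod's own service name exercises hairpin routing. The three probes from this run:

	kubectl --context auto-142288 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"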

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (96.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m36.640749565s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-343449 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-343449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-343449 -n newest-cni-343449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-343449 -n newest-cni-343449: exit status 2 (325.364311ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-343449 -n newest-cni-343449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-343449 -n newest-cni-343449: exit status 2 (304.161732ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-343449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-343449 -n newest-cni-343449
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-343449 -n newest-cni-343449
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.98s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-z6kgt" [32004e48-3f8b-4e53-8732-faa759c63f5e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003941215s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
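
Note: with --cni=flannel the plugin runs as a DaemonSet in the kube-flannel namespace, and ControllerPod simply waits for a pod labelled app=flannel to be Running. One way to make the same check by hand (a sketch; this kubectl invocation is not part of the test itself):

	kubectl --context flannel-142288 get pods -n kube-flannel -l app=flannel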

TestNetworkPlugins/group/bridge/Start (112.03s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m52.025440536s)
--- PASS: TestNetworkPlugins/group/bridge/Start (112.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-142288 "pgrep -a kubelet"
I0414 14:52:06.005390  659249 config.go:182] Loaded profile config "flannel-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
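
Each KubeletFlags check in this run is a single ssh probe: pgrep -a prints the kubelet PID together with its full command line, so the flags the profile was started with can be inspected directly:

	out/minikube-linux-amd64 ssh -p flannel-142288 "pgrep -a kubelet"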

TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hw4l5" [d61cfed7-dbe9-4c39-a765-3a6028929ec2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hw4l5" [d61cfed7-dbe9-4c39-a765-3a6028929ec2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003914598s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)
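
The NetCatPod steps deploy a small netcat/dnsutils workload and poll until it is Running. Outside the harness, an equivalent wait can be expressed with kubectl alone (a sketch; the app=netcat label comes from the manifest referenced above):

	kubectl --context flannel-142288 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context flannel-142288 wait pod -l app=netcat --for=condition=Ready --timeout=15m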

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
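
The Localhost and HairPin probes differ only in the target: nc -z localhost 8080 confirms the pod can reach its own port over loopback, while nc -z netcat 8080 routes through the netcat Service back to the same pod, which only succeeds when the network plugin handles hairpin traffic. Both checks, runnable as-is against this profile:

	kubectl --context flannel-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context flannel-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"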

TestNetworkPlugins/group/kindnet/Start (95.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m35.020455774s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (95.02s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zwmhn" [7c6e0746-5323-4850-9a9a-e48677841b76] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003976771s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-142288 "pgrep -a kubelet"
I0414 14:53:26.812547  659249 config.go:182] Loaded profile config "enable-default-cni-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-988gm" [dbfca26a-cf4b-4847-a249-4612d9d10f2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-988gm" [dbfca26a-cf4b-4847-a249-4612d9d10f2f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005301244s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zwmhn" [7c6e0746-5323-4850-9a9a-e48677841b76] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004194448s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-817380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-817380 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-817380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-817380 -n old-k8s-version-817380
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-817380 -n old-k8s-version-817380: exit status 2 (299.089284ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-817380 -n old-k8s-version-817380
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-817380 -n old-k8s-version-817380: exit status 2 (263.274328ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-817380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-817380 -n old-k8s-version-817380
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-817380 -n old-k8s-version-817380
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.68s)
E0414 14:55:44.849181  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/Start (90.71s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0414 14:53:49.258560  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:49.264955  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:49.276361  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:49.297785  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:49.339311  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:49.420868  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:49.582212  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:49.904125  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:50.547458  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:51.829512  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m30.712642899s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (90.71s)
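
Note the flag difference in this group: CNI-based profiles are selected with --cni=<name or manifest>, while kubenet is a kubelet network plugin rather than a CNI and is selected with --network-plugin=kubenet. Compare (abridged from the commands above):

	out/minikube-linux-amd64 start -p bridge-142288 --memory=3072 --cni=bridge --driver=kvm2
	out/minikube-linux-amd64 start -p kubenet-142288 --memory=3072 --network-plugin=kubenet --driver=kvm2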

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-142288 "pgrep -a kubelet"
I0414 14:53:52.815680  659249 config.go:182] Loaded profile config "bridge-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8nh4t" [114b6d6f-34b7-4eb7-a6c4-6479a2416f8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 14:53:54.391644  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-8nh4t" [114b6d6f-34b7-4eb7-a6c4-6479a2416f8f] Running
E0414 14:53:58.965135  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:53:59.516176  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003988023s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

TestNetworkPlugins/group/custom-flannel/Start (89.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0414 14:53:57.934649  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/addons-404718/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m29.550555704s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.55s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vltm4" [e5bee851-f211-428c-81eb-b6e38f50f13f] Running
E0414 14:54:09.758147  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005283638s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-142288 "pgrep -a kubelet"
I0414 14:54:15.883678  659249 config.go:182] Loaded profile config "kindnet-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-f859z" [76d71be0-180b-4eca-82ba-26b24f99b9cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 14:54:17.012273  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:17.018663  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:17.030232  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:17.052255  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:17.093891  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:17.175176  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:17.337458  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:17.659798  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:18.301482  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:19.583796  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-f859z" [76d71be0-180b-4eca-82ba-26b24f99b9cf] Running
E0414 14:54:23.234026  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:23.555453  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:24.197366  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:25.479686  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:27.268396  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:28.041647  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004499332s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

TestNetworkPlugins/group/calico/Start (109.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0414 14:54:22.909087  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:22.915499  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:22.927360  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:22.948831  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:22.990313  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:23.071780  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m49.334002383s)
--- PASS: TestNetworkPlugins/group/calico/Start (109.33s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (93.18s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0414 14:54:57.992335  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/default-k8s-diff-port-946130/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:55:03.886970  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/no-preload-892953/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:55:11.202583  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/old-k8s-version-817380/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-142288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m33.178774054s)
--- PASS: TestNetworkPlugins/group/false/Start (93.18s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-142288 "pgrep -a kubelet"
I0414 14:55:12.912266  659249 config.go:182] Loaded profile config "kubenet-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-142288 replace --force -f testdata/netcat-deployment.yaml
E0414 14:55:13.124093  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/gvisor-249524/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jgnvp" [8077cbd3-fcb4-47d0-9a94-d377fb9c7e64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jgnvp" [8077cbd3-fcb4-47d0-9a94-d377fb9c7e64] Running
E0414 14:55:22.030824  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/skaffold-594771/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.003419948s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-142288 "pgrep -a kubelet"
I0414 14:55:25.787634  659249 config.go:182] Loaded profile config "custom-flannel-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vmjnr" [f59db76e-b47d-46c5-bc19-324d0f67b434] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vmjnr" [f59db76e-b47d-46c5-bc19-324d0f67b434] Running
E0414 14:55:32.135484  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004533298s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w7vb4" [0dec8887-b9f2-49ef-bb3b-76ee19c47017] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004170521s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-142288 "pgrep -a kubelet"
I0414 14:56:18.142074  659249 config.go:182] Loaded profile config "calico-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wqbnw" [12cb76fe-1f11-4fc6-a9f6-64ce75173a94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 14:56:18.530598  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:56:18.537064  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:56:18.548516  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:56:18.569890  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:56:18.612173  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-wqbnw" [12cb76fe-1f11-4fc6-a9f6-64ce75173a94] Running
E0414 14:56:23.664051  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003671333s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.22s)

TestNetworkPlugins/group/false/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-142288 "pgrep -a kubelet"
E0414 14:56:18.693974  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:56:18.856112  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
I0414 14:56:18.876306  659249 config.go:182] Loaded profile config "false-142288": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

TestNetworkPlugins/group/false/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-142288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l28cd" [1e8da5b5-317a-4fb4-b7a9-edb1b9864a52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 14:56:19.177903  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:56:19.820085  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:56:21.102297  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-l28cd" [1e8da5b5-317a-4fb4-b7a9-edb1b9864a52] Running
E0414 14:56:28.785443  659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/auto-142288/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.005584517s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.24s)

TestNetworkPlugins/group/false/DNS (16.75s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-142288 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-142288 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14365319s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
I0414 14:56:44.258081  659249 retry.go:31] will retry after 1.459159158s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context false-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (16.75s)
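
With --cni=false the first in-cluster lookup timed out and the harness retried roughly 1.5s later before passing. A hand-rolled equivalent of that retry loop (timings illustrative):

	for i in 1 2 3; do
		kubectl --context false-142288 exec deployment/netcat -- nslookup kubernetes.default && break
		sleep 2
	done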

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-142288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-142288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)
Test skip (34/344)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
187 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
214 TestKicCustomNetwork 0
215 TestKicExistingNetwork 0
216 TestKicCustomSubnet 0
217 TestKicStaticIP 0
249 TestChangeNoneUser 0
252 TestScheduledStopWindows 0
256 TestInsufficientStorage 0
260 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.16
299 TestNetworkPlugins/group/cilium 4.35

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-948278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-948278
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/cilium (4.35s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-142288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-142288

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-142288

>>> host: /etc/nsswitch.conf:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /etc/hosts:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /etc/resolv.conf:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-142288

>>> host: crictl pods:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: crictl containers:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> k8s: describe netcat deployment:
error: context "cilium-142288" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-142288" does not exist

>>> k8s: netcat logs:
error: context "cilium-142288" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-142288" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-142288" does not exist

>>> k8s: coredns logs:
error: context "cilium-142288" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-142288" does not exist

>>> k8s: api server logs:
error: context "cilium-142288" does not exist

>>> host: /etc/cni:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: ip a s:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: ip r s:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: iptables-save:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: iptables table nat:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-142288

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-142288

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-142288" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-142288" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-142288

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-142288

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-142288" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-142288" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-142288" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-142288" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-142288" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: kubelet daemon config:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> k8s: kubelet logs:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20512-652075/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:38:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.111:8443
  name: cert-expiration-014088
contexts:
- context:
    cluster: cert-expiration-014088
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 14:38:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-014088
  name: cert-expiration-014088
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-014088
  user:
    client-certificate: /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/cert-expiration-014088/client.crt
    client-key: /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/cert-expiration-014088/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-142288

>>> host: docker daemon status:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: docker daemon config:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: docker system info:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: cri-docker daemon status:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: cri-docker daemon config:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: cri-dockerd version:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: containerd daemon status:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: containerd daemon config:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: containerd config dump:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: crio daemon status:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: crio daemon config:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: /etc/crio:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

>>> host: crio config:
* Profile "cilium-142288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-142288"

----------------------- debugLogs end: cilium-142288 [took: 4.079970351s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-142288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-142288
--- SKIP: TestNetworkPlugins/group/cilium (4.35s)